Paul Bernays Lectures

Last week, I had the honor of giving the annual Paul Bernays Lectures at ETH Zürich. My opening line: “as I look at the list of previous Bernays Lecturers—many of them Nobel physics laureates, Fields Medalists, etc.—I think to myself, how badly did you have to screw up this year in order to end up with me?”

Paul Bernays was the primary assistant to David Hilbert, before Bernays (being Jewish by birth) was forced out of Göttingen by the Nazis in 1933. He spent most of the rest of his career at ETH. He’s perhaps best known for the von Neumann-Bernays-Gödel set theory, and for writing (in a volume by “Hilbert and Bernays,” but actually just Bernays) arguably the first full proof of Gödel’s Second Incompleteness Theorem.

Anyway, the idea of the Paul Bernays Lectures is to rotate between Bernays’s different interests in his long, distinguished career—interests that included math, philosophy, logic, and the foundations of physics. I mentioned that, if there’s any benefit to carting me out to Switzerland for these lectures, it’s that quantum computing theory combines all of these interests. And this happens to be the moment in history right before we start finding out, directly from experiments, whether quantum computers can indeed solve certain special problems much faster than classical computers.

The general theme for my three lectures was “Quantum Computing and the Fundamental Limits of Computation.” The attendance was a few hundred. My idea was to take the audience from Church and Turing in the 1930s, all the way to the quantum computational supremacy experiments that Google and others are doing now—as part of a single narrative.

If you’re interested, streaming video of the lectures is available as of today (though I haven’t watched it—let me know if the quality is OK!), as well as of course my slides. Here you go:

Lecture 1: The Church-Turing Thesis and Physics (watch streaming / PowerPoint slides) (with an intro in German by Giovanni Sommaruga, who knew Bernays, and a second intro in English by Renato Renner, who appeared on this blog here)

Abstract: Is nature computable?  What should we even mean in formulating such a question?  For generations, the identification of “computable” with “computable by a Turing machine” has been seen as either an arbitrary mathematical definition, or a philosophical or psychological claim. The rise of quantum computing and information, however, has brought a fruitful new way to look at the Church-Turing Thesis: namely, as a falsifiable empirical claim about the physical universe.  This talk seeks to examine the computability of the laws of physics from a modern standpoint—one that fully incorporates the insights of quantum mechanics, quantum field theory, quantum gravity, and cosmology.  We’ll critically assess ‘hypercomputing’ proposals involving (for example) relativistic time dilation, black holes, closed timelike curves, and exotic cosmologies, and will make a 21st-century case for the physical Church-Turing Thesis.

Lecture 2: The Limits of Efficient Computation (watch streaming / PowerPoint slides)

Abstract: Computer scientists care about what’s computable not only in principle, but within the resource constraints of the physical universe.  Closely related: which types of problems are solvable using a number of steps that scales reasonably (say, polynomially) with the problem size?  This lecture will examine whether the notorious NP-complete problems, like the Traveling Salesman Problem, are efficiently solvable using the resources of the physical world.  We’ll start with the P=?NP problem of classical computer science—its meaning, history, and current status.  We’ll then discuss quantum computers: how they work, how they can sometimes yield exponential speedups over classical computers, and why many believe that not even they will do so for the NP-complete problems.  Finally, we’ll critically assess proposals that would use exotic physics to go even beyond quantum computers, in terms of what they would render computable in polynomial time.

Lecture 3: The Quest for Quantum Computational Supremacy (watch streaming / PowerPoint slides)

Abstract: Can useful quantum computers be built in our world?  This talk will discuss the current status of the large efforts currently underway at Google, IBM, and many other places to build noisy quantum devices, with 50-100 qubits, that can clearly outperform classical computers at least on some specialized tasks — a milestone that’s been given the unfortunate name of “quantum supremacy.”  We’ll survey recent theoretical work (on BosonSampling, random circuit sampling, and more) that aims to tell us: which problems should we give these devices, that we’re as confident as possible are hard for classical computers? And how should we check whether the devices indeed solved them?  We’ll end by discussing a new protocol, for generating certified random bits, that can be implemented almost as soon as quantum supremacy itself is achieved, and which might therefore become the first application of quantum computing to be realized.

Finally, thanks so much to Giovanni Sommaruga and everyone else at ETH for arranging a fantastic visit.

43 Responses to “Paul Bernays Lectures”

  1. William Hird Says:

    “how badly did they screw up to wind up with me?”
    Hi Scott, you are too modest, are you kidding, who better than you to give this talk? Yes, you can name names 🙂

  2. Scott Says:

    William #1: I mean, no one better than me to give these specific lectures, but surely many better people to give other possible Bernays Lectures. 🙂

  3. Andrei Says:

    Scott #2:

    Well there’s your answer! Maybe they just thought the Bernays series was ripe for your brand of quantum computation.

  4. Ian Says:

    These were great, thanks so much for sharing!

  5. Paul Topping Says:

    I’ve watched the first lecture and will watch the other two soon. I was wondering if you think there are kinds of computation more powerful than that of a Turing Machine. You mention hypercomputation but if you gave an opinion on whether it exists, I missed it. The Wikipedia page on “Hypercomputation” has a great quote attributed to Martin Davis:

    Hypercomputation is little more than: “if non-computable inputs are permitted then non-computable outputs are attainable.”

    Is this your view as well?

  6. Scott Says:

    Paul #5: In some sense, the entire point of the lecture was to give an answer to the question, which I did in the conclusion when I said that the (physical) Church-Turing Thesis still seems to be on extremely solid ground. Or in other words: that physical hypercomputation seems to be impossible. As I covered in the talk, the arguments to the contrary strike me as extremely flimsy—they either
    (1) cheat with definitions in uninteresting ways (what I called “theft over honest toil” in the talk), or else
    (2) completely disregard what we learned from black hole thermodynamics about fundamental limits on computation and information storage imposed by the Planck scale.
    And yes, Martin Davis’s quip strikes me as a sufficient response to many of the arguments in class (1) (like Wegner and Goldin’s).

  7. p Says:

    Hi Scott, permit me a naive question: Could one use a small quantum computer to verify the computation of a larger one, in the same way that a classical computer can verify the computation of a small quantum computer (taking much more time)? Thanks for the talks!

  8. Scott Says:

    p #7: That depends on the problem, but we don’t think there’s any general reason for a small quantum computer to be useful in verifying a larger one—unless the small QC was actually exchanging qubits with the large QC in order to verify it. In that case, it could run the protocol of Aharonov-Ben-Or-Eban, but then the protocol of Broadbent-Fitzsimons-Kashefi showed that it doesn’t even need to be a QC anyway; a classical computer that can send polarized photons into a fiber-optic cable suffices.

  9. Ted Says:

    Fantastic lectures Scott, thanks so much for posting.

    Couldn’t Google largely sidestep all the theoretical issues around the classical hardness of sampling, by simply holding a public competition? At a certain pre-announced time, an independent third party publicly announces the random circuit instance C, then *anyone* has a short window to publicly submit a sample of some fixed number of bit strings. Then there’s a lengthy judging period when a classical supercomputer calculates the cross-entropy of each submitted sample, and Google pays $1 million to whoever submitted the sample with the highest cross-entropy. If all goes well, their own submission is the only sample whose cross-entropy is significantly above k/2^n, and no money changes hands.

    (There may be more submissions than can be feasibly checked, but there are ways around that – e.g. an auction to pay a retroactive entry fee which caps the number of eligible entries to the “best” ones. Your complexity-theory results suggest that the probability of beating Google would be low enough that only a negligible entry fee would be required. You’d want to give plenty of preparation time before starting the competition, and set the prize high enough to motivate the best classical simulation people to compete – something that D-Wave has conspicuously never done. But the real motivation would probably be the publicity of publicly beating Google.)

    My proposal doesn’t answer the theoretically interesting question “On complexity-theoretic grounds, could any classical computer feasibly perform this calculation?”. Instead, it answers the theoretically uninteresting question “At this particular moment in time, can any actual existing classical computer perform this calculation?”. But it has several advantages: (a) it doesn’t rely on (although it is motivated by) any unproven conjectures from complexity theory, so it is arguably a more convincing demonstration of the “practical” version of quantum supremacy. It’s also much easier for laypeople to understand, and so arguably better publicity for Google if they win. (b) Everything is totally public and externally verifiable. In particular, it removes any suspicion that Google cheated and chose a non-random circuit instance with some hidden structure that they’ve learned to simulate classically. (c) It removes the need to choose a semi-arbitrary value of b > 1 ahead of time. Of course, they can calculate the value of b after the fact for each submission, and if Google’s is the only submission with b significantly greater than 1, then that would simultaneously provide the “theoretical” evidence as well.
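
    In code, the judging step might look like the following sketch (Python; the names are hypothetical, and it uses the linear variant of the score, assuming the judges can classically compute the ideal probability p_C(x) of each submitted n-bit string):

        def total_score(samples, ideal_prob):
            """Sum of the ideal probabilities p_C(x) of the k submitted strings."""
            return sum(ideal_prob(x) for x in samples)

        # Uniformly random guessing makes each term ~1/2^n on average, so a
        # k-string sample totals about k/2^n; a faithful quantum device's
        # sample should land significantly above that baseline.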

  10. cb Says:

    Can the certified random bits in the 3rd lecture not only be certified as random but also certified as coming from a particular trusted source, were the configuration of the circuit used in the sampling process to be kept secret? In practice how constrained or how large is the space of possible configurations to the circuit?

  11. Paul Topping Says:

    Scott (#6): I guess I didn’t equate the validity of Church-Turing with the existence/non-existence of other kinds of computation. Would the existence of some other kind of computation really disprove it? To be clear, I’m not trying to claim such a computation exists.

  12. Scott Says:

    Ted #9: Google will basically do as you suggested. That is, they’ll publish the sampling challenges online, so it will be open to any other research group to demonstrate a faster classical simulation (and indeed, groups at IBM and Alibaba have already put out papers in this direction, pushing the frontier of how many qubits they can simulate classically). This is an extremely important complement to complexity-theoretic hardness arguments, and is similar to what people do in applied cryptography.

  13. Scott Says:

    Paul #11: The entire position I took in the talk is that today, the Church-Turing Thesis is most fruitfully understood as a falsifiable empirical claim—namely, that any physical system can be simulated by a Turing machine to any desired precision. Under that conception of the thesis, yes, a ‘hypercomputer’ would disprove it, by definition.

  14. Scott Says:

    cb #10: If you wanted to verify the source of the random bits, the obvious way to do that would just be to append a conventional digital signature to them.

    If we had a system with (say) 50 qubits and 500 gates, the number of possible challenges would exceed 10^500. In particular, enough that no lookup table caching deterministic responses to the challenges could fit in the observable universe. (Although in practice, one would probably generate the challenges pseudorandomly, in which case the number of possible challenges would “only” be exponential in the seed length of the PRG.)
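
    (A back-of-the-envelope version of that count, in Python, under the purely illustrative assumption of a discrete gate set with 8 two-qubit gate types, each applicable to any pair of the 50 qubits:)

        from math import comb, log10

        qubits, gates, gate_types = 50, 500, 8            # illustrative parameters
        choices_per_slot = gate_types * comb(qubits, 2)   # 8 * 1225 = 9800
        print(gates * log10(choices_per_slot))            # ~1996, i.e. well over 10^500 circuits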

  16. Jacob Says:

    Great series of lectures! Very exciting time for quantum computing.

    I have a small question about ‘hyper computing’. You said that you don’t agree with Deutsch’s claim that different manifestations of physics could lead to different (more powerful) notions of computing. How would a Zeno computer not qualify here? Do you think it is even impossible in theory to construct a theory of physics that could include a Zeno computer?

    Also, separately: in Deutsch’s book I believe he conjectures some sort of Zeno computer that could possibly exist in a big rip/big crunch scenario, where the metric of the universe goes to zero or infinity in finite time. He uses this to raise the possibility that humans could ‘live forever’ by uploading their consciousness onto these infinitely fast Zeno computers.

    So I guess my question is in 2 parts. Is a big rip/big crunch Zeno computer ruled out by current physics? And could any system of physics exist at all that could give rise to a Zeno computer (maybe physics ‘existing’ is not well defined enough to answer this question)?

  17. Scott Says:

    Jacob #16: Again, yes, I’d say that a Zeno computer is ruled out by current physics, as long as you consider the holographic entropy bound “current” rather than “future” (it’s never been experimentally tested, but looks like an unavoidable conclusion of GR combined with QM).

    If our world had had only QM and no gravity, or only gravity and no QM, then for all I know a Zeno computer might have been possible—but it would depend on the details of that hypothetical world.

    In his book, Deutsch was reporting the speculations of Frank Tipler, which Tipler also relates to his Christian religious beliefs. In 1998, a year after Deutsch’s book was published, Tipler’s cosmology was clearly ruled out by the discovery of dark energy, which (if it’s as constant as it appears to be) implies that there won’t be a Big Crunch. I believe that Deutsch has accepted this and moved on, though Tipler has not.

  18. Ari Says:

    So is the universe a simulation? Are we ems?

    Sabine Hossenfelder doesn’t believe the universe can be simulated.

    Dear Professor, I request a live video talk with you and Sabine discussing the simulability of the universe, although only one of you is a physicist.

  19. gentzen Says:

    Great series of lectures! Nice arguments, debates and opinions.

    I first consumed the slides on an iPad, but there were annoyingly many ‘?’s. So I downloaded them on my laptop, and found out that most of the slides contain nice animations. So probably your decision not to provide PDFs of the slides was a good idea, because otherwise I would not have discovered the animations.

  20. Concretion Says:

    The CTT abstracts from the finiteness of space, and that has proven really useful. It does not abstract from the finiteness of time (in the sense of allowing an infinite number of computation steps). Is there a neat explanation of why the CTT treats space and time differently? I conjecture that this is because space can be reused, while time can’t, but I’m not sure.

  21. Scott Says:

    Ari #18: If you ask, could the universe be simulated by a Turing machine to any desired precision—then as I said, I think the evidence strongly points to “yes” and that that’s a profound fact about our world.

    If you ask, could we get direct evidence of ‘simulators,’ like a bug or an error message—well, you can place your own bet, but I’d personally place this in the same class with any other religious hypothesis, like Armageddon or the Second Coming or the Moshiach. I don’t live my life in expectation of such things. 🙂

    If, finally, you ask: are we in a simulation even if no such evidence can ever be obtained—well, that’s a question whose answer, by definition, can have no effect on any of our explanatory accounts of the world. So the more we can train ourselves to reformulate the question more productively, the better off we’ll be.

    I’ve had many exchanges with Sabine over the years. There’s probably some daylight between the answers I gave above and her answers, but I don’t remember what it is.

  22. Scott Says:

    Concretion #20: I don’t see that the Church-Turing Thesis actually treats time and space differently in the manner you say (even though they are, of course, different). In principle, a Turing machine could just move right on its tape head forever, traversing an infinite amount of both time and space. On the other hand, in order for a function f to be “computable”—and this is really what the thesis is about—there has to be a Turing machine for f that halts on every input, having used only a finite amount of both time and space.

  23. cb Says:

    Scott #14: I guess the circuit configuration must be revealed anyway, to anyone wishing to certify the randomness—is that correct? As in, to apply the inequality they have to know the source distribution, which is really just a specification of the circuit. In that case I really can’t think of anything interesting about trying to keep the circuit configuration secret for a period of time.

    Aside from secrecy, is there anything at all consequential about the choice of circuit configuration?

    Since the samples are certifiably random, I guess the only thing that remains is the question of how the non-uniform source distribution ultimately interacts with a chosen randomness extraction process. I’m not familiar enough myself, but Wikipedia says:

    >”there is no single extractor that has been proven to produce truly random output from any type of weakly random source”

    So I suppose someone could fear that a circuit configuration could be cleverly chosen such that it exposes a backdoor when coupled with a popular randomness extraction procedure? (Of course, when I say backdoor I just mean rendering the randomness extraction useless in the worst case, so that the output bits have the same bias or entropy as the original sampled bits, not that they are somehow completely de-randomized.)

  24. gentzen Says:

    Martin Davis’s actual quip from The Myth of Hypercomputation reads:

    When the claims are viewed critically, it is seen that they amount to little more than the obvious comment that if non-computable inputs are permitted, then non-computable outputs are attainable.

    I had hoped that he was hinting at Chaitin’s ingenious construction by this, but apparently the proposals for hypercomputation he had to refute were less illuminating. Still, Martin Davis’s article itself is nicely written, and makes arguments similar to Scott’s.

    Anyway, that quote reminded me of what I tried to write in January 2017, but then decided to not even try: One part of the readers would just understand it without much explanation, and the other part would have a hard time even if I managed to explain it clearly. So here is the version without any explanation or elaboration (since I never wrote one):

    Say you buy a brand new TM with oracle for HALT from the company D-Wine. How can you be sure that they didn’t cheat you? They assure you that it passed both their PA and ZFC test suites. After you receive the machine, you run your Woodin test suite on it, and it fails. Can you conclude that the machine is rubbish?

    Assume D-Wine now has a new internet offering, which they call “book of arithmetic”. It actually seems quite cheap, only 1 cent per gigabyte. And apparently the information is highly compressed, using state of the art algorithmic information theory. Their sales representative said that it would enable you to decide arbitrary arithmetic sentences, if you had sufficient computational resources to decode the compression. For that he recommended their famous TM with oracle for HALT.

  25. Scott Says:

    cb #23: That’s correct, the verifier knows the circuit.

    What’s important about the circuit is mostly just that it gets drawn from a distribution for which spoofing is classically hard.

    While I unfortunately didn’t have time to go into it in this talk, my scheme (like other certified randomness schemes) requires the classical challenger to start with a small seed. There are several reasons, one of which is of course to generate the challenges. But a second reason is that, with a seed, you can use a seeded randomness extractor—for example, the Guruswami-Umans-Vadhan (GUV) extractor—to get out almost all the min-entropy that’s present in your randomness source, regardless of how it’s distributed, and output nearly uniformly random bits.
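
    (This isn’t the GUV construction itself, but here’s a minimal sketch of the seeded-extraction step, using Toeplitz hashing, a two-universal hash family that the Leftover Hash Lemma also makes a valid seeded extractor:)

        import numpy as np

        def toeplitz_extract(raw_bits, seed_bits, m):
            """Extract m nearly uniform bits from n weakly random input bits,
            via a Toeplitz matrix built from n + m - 1 seed bits."""
            n = len(raw_bits)
            assert len(seed_bits) == n + m - 1
            # T[i][j] = seed_bits[i - j + n - 1]: constant along each diagonal.
            T = np.array([[seed_bits[i - j + n - 1] for j in range(n)]
                          for i in range(m)], dtype=int)
            return (T @ np.array(raw_bits, dtype=int)) % 2

    How many bits m it’s safe to extract is governed by the certified min-entropy of the source.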

  26. Seed generator Says:

    Scott #25: In that case, what prevents the seed-generator from being compromised and used to generate a trivial circuit (one that even classical computers can simulate)?

  27. fred Says:

    Scott #21

    If we accept the following two statements:

    1) realities similar to ours can be simulated with a Turing machine.

    2) realities similar to ours naturally assemble Turing machines (in our case, life appeared, intelligent life appeared, computers appeared, VR appeared, …).

    It then seems unavoidable that most realities similar to ours would be simulations.

    Also, if we ever build general AIs that seem to be conscious, the above argument would be strengthened because this would suggest that a simulation and what’s being simulated are indistinguishable from a subjective point of view.

  28. Eric Says:

    Dear Scott,

    I was attending the first lecture but did not quite make it through: the constant deluge of ‘ah’ pauses made for a very annoying listening experience. While I believe that your writing is at the top of what is offered in terms of clarity and grace, I would like to leave this comment as (implicit) constructive criticism for you to improve your lecturing style.

  29. Scott Says:

    Seed generator #26: Given your handle, we can’t just use you? 🙂

    Seriously, as long as we trusted the classical code that was generating the challenges, a seed would in principle only need to be generated once—from the weather, quasars, the Dow Jones average, radioactive decays, anything—in order to provide a stream of challenges that were unpredictable to the QC for decades, and hence true random bits from the QC for decades.

    It’s true, though, that how to delegate trust among all the parties involved is one of the biggest questions with a scheme like mine. Classical cryptographers will probably have more to say about it than I will.

  30. Scott Says:

    Eric #28: Experience shows that the only way for me even to approximate the diction you want is literally to write down my entire talk in advance—like a humanities professor, or a journalist or politician reading from their TelePrompTer, or Stephen Hawking. Alas, with two rowdy kids and no nanny to take care of them, it’s a miracle whenever I manage to leave the house to give a talk at all. Which means that you get to experience my talks, if you do, with my natural diction, as if I were telling it to you over lunch. Sorry that that’s not good enough.

  31. Randall K McRee Says:

    Fred #27

    There seems to be quite a confusion here between a simulation and a *prediction*. As Scott pointed out in the Bernays lecture, the simulation of a future event supplies you with knowledge of a probability distribution. It does not actually tell you whether the bullet is in the chamber, and therefore whether you will be dead or not. So simulation is not tantamount to *knowing* the future. That isotope still decays randomly in the real world, etc.

  32. Randall K McRee Says:

    Eric #28, Scott #30.
    This is what Toastmasters is for. I went for some time a few years ago. Mostly I was trying to cure the anxiety I felt when talking in front of more than two people. It helped with that, and with the ‘ah’s, ‘uh’s, and stutters as well. It only took a few visits for me to improve immeasurably.
    https://www.toastmasters.org/Find-a-Club/00004256-austin-toastmasters-club

  33. Scott Says:

    Randall #32: But I don’t have speaking anxiety. Probably 5000 other anxieties, but not that one! 🙂 I did once try a speech therapist, who didn’t noticeably help, though maybe I could try again. Mostly, though, I already know what I’d need, for those listeners who actually care about this, and it’s simply more time to prepare/practice my talks.

  34. cb Says:

    Scott #25, #29: I was missing the obvious point that every new batch of bits someone requests must be a new challenge-response, where the response comes in a timely fashion after the challenge; otherwise, non-certifiably-random bits could have been slowly generated on a supercomputer. The circuit specification is the challenge.

    I guess there is another situation, though, where you just want a good hardware RNG in-house: 100% trusted, with no challenge-response protocol needed. In that case, it seems to me that you might still prefer one of these classically hard sampling problems over some QC algorithm (or hardware RNG, for that matter) that doesn’t have these complexity-theoretic arguments about its classical hardness. For this use-case, you don’t need any seed or anything, and can just use a single circuit configuration for all of time—is that correct? And theoretically that circuit could even be publicly known and it wouldn’t matter (although conservatively you would probably not want it to be public or for it to be leaked), at least insofar as you trust your randomness extractor to eliminate the bias of the source distribution.

  35. Dimitri Says:

    Hi Scott,

    I am a longtime reader of your blog; this is my first comment. I live in Switzerland, and had I not missed that you were in Zurich, I would definitely have joined your audience. In any case, I would like to express a big THANK YOU for the amazing job you are doing in general in explaining your work in a manner that is understandable to an audience with only a superficial background in theoretical CS, like me (I am a machine learning engineer). Yesterday I watched the video streams you posted; I am amazed at how you are able to explain quantum computing in a few slides and to give an overview of cutting-edge theoretical research in a few minutes, casually and with a sense of humor. Just from these video streams and a general CS background, one who cares enough to listen can get a pretty good level of understanding of what QC is about, how and why the field was born, and an idea of the engineering challenges that all these companies are trying to overcome. I also admit that for the first time in my life (despite being taught about it in the university) I realized this about the P=?NP question: it is not that it is the only interesting question one can ask about what is “efficiently” computable, but it is one of the natural questions one should try to answer first before attempting to see further.

    Following a remark made earlier, I would like to comment that maybe almost everyone (for sure everyone I know, including myself) needs hours and hours of preparation, usually spread over many days, to eliminate all those pauses that occur naturally when we give a presentation on a complex subject. I, for one, couldn’t care less: the message is delivered even when the speaker takes a few pauses to organize his/her thoughts. As a listener, I in any case need to really concentrate to follow such a talk (dealing with topics that I am not 100% comfortable with), so a few “uhm” moments are irrelevant. Thanks again.

  36. Scott Says:

    Dimitri #35: Thanks so much! Made my day.

  37. Scott Says:

    cb #34: It would be great, for purposes of my protocol, to be able to flesh out the assumptions under which the use case you describe makes sense! E.g., a customer that owns a QC, but doesn’t fully trust it, nor does it fully trust any hardware RNG that it’s able to buy or build for itself.

    Both in this case, and in the sort of use case I had in mind (a public randomness beacon), the question of whether you can just keep reusing the same challenge circuit over and over is difficult, and indeed is one of the main open problems I leave. It seems plausible that you could squeeze more and more certified entropy out of the same circuit, at least until an adversary would’ve had enough time to backdoor that circuit, at which point a new circuit would be needed. But any security analysis justifying that conjecture would need to go beyond mine.

  38. Eric Says:

    Dear Scott,

    I meant no offense. I greatly appreciate you taking the time for these talks and if my previous comment caused you any discomfort, I sincerely apologize. Regarding your explanation, I can see that with time being the main issue, there is indeed little one can do.

  39. J.C. Says:

    Scott Lecture #1: A starker objection to your hypothesis is that a Turing machine can’t even generate a single truly random bit, which nature can do via quantum mechanics. Adding a random sampling step to Turing machines is just a version of no true Scotsmanism. No one doubts the Church-Turing thesis for Church’s and Turing’s original motivation of modeling axiomatic formal mathematical systems, but for modeling physical systems it was Born (rule) dead.

  40. Scott Says:

    J.C. #39: I explicitly discussed that in the lecture itself—in the section about tiresome and boring objections to the Church-Turing Thesis, the ones that quibble over definitions and try to get away with theft over honest toil. And I gave two responses:

    1. It’s a theorem that even a probabilistic TM can decide only languages that are computable in the ordinary sense.
    2. As long as there are only finitely many possible outputs of a (halting) probabilistic TM, a deterministic TM can perfectly calculate their probabilities, leaving only the actual spin of the roulette wheel.
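
    (To make point 2 concrete, a toy sketch that models the probabilistic machine as a deterministic function of its finite string of coin flips:)

        from fractions import Fraction
        from itertools import product

        def output_distribution(run, max_flips):
            """Exact output probabilities of a probabilistic machine that is
            guaranteed to halt within max_flips coin flips."""
            dist = {}
            for flips in product((0, 1), repeat=max_flips):
                out = run(flips)  # deterministic once the flips are fixed
                dist[out] = dist.get(out, Fraction(0)) + Fraction(1, 2 ** max_flips)
            return dist

        # e.g. the XOR of two fair coins comes out exactly {0: 1/2, 1: 1/2}:
        print(output_distribution(lambda f: f[0] ^ f[1], 2))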

    Ultimately, then, I think this objection to the Church-Turing Thesis is in the same class as Wegner and Goldin’s “but a Turing machine isn’t connected to the Internet,” or my parody “but a Turing machine can’t toast bread.” I.e., it faults Turing machines not for any intrinsic lack of computational expressive power, but merely for lack of some external peripheral that you’d need to plug in to the USB port to get the TM to do what you wanted in the physical world. The trouble is that there are infinitely many possible external peripherals you might want, and it’s obviously outside the scope of the theory of computation to try to enumerate all of them.

    PS. If you just wanted a single random bit, and it was allowed to be the same from one run to the next, an obvious solution would be to hardwire it into the TM. 🙂

  41. Ari Says:

    Scott,

    Thanks for the reply. Well, I’m happy to hear it’s possible. I think Sabine said it’s not.

    http://backreaction.blogspot.com/2017/03/no-we-probably-dont-live-in-computer.html
    https://www.scottaaronson.com/blog/?p=3208

    “I’m not claiming it’s not possible to simulate our observations on a (quantum) computer. My point is that it’s difficult to show it can be done, and nobody presently knows how to do it. Whether you can even do it from qubits is an active area of research, as you say, and that’s leaving aside the question of whether speaking of a programmer is still science or religion. ” Comment #9

    I’d really love for you two to discuss these things on video.

  42. gentzen Says:

    Scott, I have a “straightforward” question with respect to the Cauchy-Schwarz inequality as an example of first-order reasoning about the real numbers: the provably doubly-exponential part of the decision procedure is the quantifier-elimination part, but the Cauchy-Schwarz inequality is already quantifier-free. I can see why deciding quantifier-free formulas is NP-hard, and googling turned up a PSPACE algorithm for deciding them, but I couldn’t find a proof (neither by googling nor by trying it myself) that deciding them would be PSPACE-complete. Do you know a better “lower bound” for the computational complexity of that problem, or why it is hard to come up with a better lower bound than NP-hardness?

    Another question with respect to the first lecture and “Enough with theft over honest toil!”: Is this merely a fight over words? If the “programming language semanticists” had used new names like “Recursion Thesis” or “Abstract Recursion Procedures” to describe the questions they try to answer, would you have been willing to grant that they are trying to resolve a legitimate puzzle? I guess you won’t have time to read it, but Theses for Computation and Recursion on Concrete and Abstract Structures by Solomon Feferman (2013) tried to do that and other similar things to reduce the “disconnect” which has occurred in that discussion.

  43. Scott Says:

    gentzen #42: Yeah, with just one quantifier, you get a slightly mysterious complexity class called ExistsR (existential theory of reals), which sits between NP and PSPACE (indeed it’s in the counting hierarchy I think).

    Yes, I’d be more than happy to grant the programming language semanticists their falsified thesis so long as they used a different name for it, to make it clear that they’re talking about internal program organization rather than about what machines can exist in the world.
