## Happy Thanksgiving Y’All!

While a lot of pain is still ahead, this year I’m thankful that a dark chapter in American history might be finally drawing to a close. I’m thankful that the mRNA vaccines actually work. I’m thankful that my family has remained safe, and I’m thankful for all the essential workers who’ve kept our civilization running.

A few things:

- Friend-of-the-blog Jelani Nelson asked me to advertise an important questionnaire for theoretical computer scientists, about what the future of STOC and FOCS should look like (for example, should they become all virtual?). It only takes 2 or 3 minutes to fill out (I just did).
- Here’s a podcast that I recently did with UT Austin undergraduate Dwarkesh Patel. (As usual, I recommend 2x speed to compensate for my verbal tics.)
- Feel free to use the comments on this post to talk about recent progress in quantum computing or computational complexity! Like, I dunno, a (sub)exponential black-box speedup for the adiabatic algorithm, or anti-concentration for log-depth random quantum circuits, or an improved shadow tomography procedure, or a quantum algorithm for nonlinear differential equations, or a barrier to proving strong 3-party parallel repetition, or equivalence of one-way functions and time-bounded Kolmogorov complexity, or turning any hard-on-average NP problem into one that’s guaranteed to have solutions.
- It’s funny how quantum computing, P vs. NP, and so forth can come to feel like just an utterly mundane day job, not something anyone outside a small circle could possibly want to talk about while the fate of civilization hangs in the balance. Sometimes it takes my readers to remind me that not only are these topics what brought most of you here in the first place, they’re also awesome! So, I’ll mark that down as one more thing to be thankful for.

Comment #1 November 25th, 2020 at 9:27 pm

At 1:01:37 of the podcast video you talk about the possibility of a quantum speedup for edit distance. What do you think about the RAM/QRAM models? They seem to me to be, in nontrivial part, accidents of history, and I’m not sure whether “it matters” if edit distance is computable in \( o(n^2) \) time in either of them.
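For context, the classical \( O(n^2) \) baseline under discussion is the standard Wagner–Fischer dynamic program. A minimal sketch (a textbook implementation, not anything specific to the podcast):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Wagner-Fischer DP: O(len(a) * len(b)) time, O(len(b)) space."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))  # distances from a[:0] to each prefix of b
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution (or match)
        prev = curr
    return prev[n]

print(edit_distance("kitten", "sitting"))  # → 3
```

Breaking the quadratic barrier for this recurrence, classically or quantumly, is exactly the open question being discussed.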

Comment #2 November 25th, 2020 at 10:16 pm

nban #1: It’s true that a quantum algorithm for edit distance would mostly be useful if you had either a qRAM, or else two sequences that were defined implicitly via a formula. On the other hand, it’s such a fundamental algorithmic problem that one would love to know the right answer, practical applications or no!

Comment #3 November 25th, 2020 at 10:20 pm

Scott, what are your thoughts on the recent Google paper on arXiv arguing that the computational overhead of error correction means that, in the near term, quantum advantage requires quantum algorithms with at least quartic speedups?

Paper link: https://arxiv.org/abs/2011.04149

Comment #4 November 25th, 2020 at 10:29 pm

Is it really true that OWFs must exist for practical cryptography? If an encryption scheme were found for which encryption and decryption with the key are in Θ(n^3), where n is the key length, and the optimal cracking function is in Θ(n^2,796,203), this would not be one-way in the P vs. NP sense, but it would still be practically secure, right?
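The kind of polynomial-gap security the commenter has in mind can be illustrated with a toy version of Merkle puzzles (all names and parameters below are made up for illustration): Alice publishes n puzzles, each openable with ~n units of brute force; Bob opens one random puzzle, at cost O(n), and announces its key-id in the clear; an eavesdropper expects to open ~n/2 puzzles, for Θ(n²) total work.

```python
import hashlib
import random
import secrets

def stream(secret: int, length: int) -> bytes:
    """Toy keystream derived from a small brute-forceable secret."""
    return hashlib.sha256(b"puzzle" + secret.to_bytes(8, "big")).digest()[:length]

def make_puzzle(key_id: int, key: bytes, space: int) -> bytes:
    """One puzzle: (key_id, key, checksum) XORed with a keystream whose
    secret lies in a space of size `space` (so ~space work to open)."""
    secret = secrets.randbelow(space)
    plain = key_id.to_bytes(4, "big") + key + hashlib.sha256(key).digest()[:4]
    ks = stream(secret, len(plain))
    return bytes(a ^ b for a, b in zip(plain, ks))

def open_puzzle(blob: bytes, space: int):
    """Brute-force the secret; recognize success via the checksum."""
    for s in range(space):
        ks = stream(s, len(blob))
        plain = bytes(a ^ b for a, b in zip(blob, ks))
        key_id, key, chk = plain[:4], plain[4:20], plain[20:]
        if hashlib.sha256(key).digest()[:4] == chk:
            return int.from_bytes(key_id, "big"), key
    return None

# Alice publishes n puzzles (~n work to create).
n = 64
keys = {i: secrets.token_bytes(16) for i in range(n)}
puzzles = [make_puzzle(i, keys[i], n) for i in range(n)]

# Bob opens one random puzzle (~n work) and announces only the key id.
key_id, bob_key = open_puzzle(random.choice(puzzles), n)
assert bob_key == keys[key_id]
# Eve, seeing only the id, expects to open ~n/2 puzzles: Θ(n²) work.
```

This gives the honest parties only a quadratic advantage over the attacker, which is exactly the "at the mercy of polynomial improvements" situation discussed in the reply below.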

Comment #5 November 25th, 2020 at 11:07 pm

Saurabh #3: It’s consistent with what I’ve thought for a long time—that you’d need a pretty enormous QC before a Grover-type speedup can win out against the overhead from fault-tolerance.

Comment #6 November 25th, 2020 at 11:10 pm

MithrilGear #4:

Of course you could hope to base crypto on polynomial separations; that idea goes back at least to Merkle puzzles in the 1970s. But then the security of your system would always be at the mercy of polynomial improvements, which experience has taught us are much more common than exponential ones.

Comment #7 November 25th, 2020 at 11:47 pm

I suppose this also goes back to something you’d posted earlier about the middle possibilities of P=NP or P≠NP, in which OWFs could be proven not to exist, but there are no known practical attacks on deployed cryptosystems, or are proven to exist, but all the known schemes aside from one-time pads have efficient attacks.

The current situation is certainly an interesting one: non-OTP encryption is in the unusual position of being possible in practice, but not known to be possible in theory.

Comment #8 November 25th, 2020 at 11:57 pm

MithrilGear #7: Strictly speaking, we don’t know whether OWFs with polynomial or exponential security exist, in practice or in theory.

Comment #9 November 26th, 2020 at 2:11 am

Scott #2: I didn’t mean “matter” in a usefulness sense, but in some philosophical sense or something like that. I just don’t see a great reason why there should be random access to memory; in fact, my pop-sci-level knowledge of physics suggests a counter-reason: if you keep adding more and more memory (as is needed to define asymptotic complexity), then at some point some of the memory will be far away, and communicating with it will have latency. This kind of principle seems to matter for actual current computers: the speed of execution depends heavily on locality of reference (caching) and not just on theoretical RAM complexity (and of course it also depends on parallelization). So it’s not clear to me that an answer to a question such as “can QRAM do edit distance in \( o(n^2) \)?” says anything about the nature of computation.

Comment #10 November 26th, 2020 at 4:20 am

Needless to say, this is the first time in 4 years I’ve felt very thankful at all around Thanksgiving… and even this year that feeling didn’t arise ’til a few weeks back. Not sure what the longer term future holds (in a nation where 70+ million stiiiiill voted for Donald), but at least for the near term maybe we can Make America Respectable Again. And always thankful for the persistent work you do in that regard Scott!

Comment #11 November 26th, 2020 at 7:34 am

Would a TCS researcher risk their academic future merely by suggesting that conferences/journals should accept TeXmacs submissions, on the grounds that TeXmacs is a superior tool to LaTeX for writing theoretical computer science papers?

Comment #12 November 26th, 2020 at 9:01 am

Amir Michail #11: No, they would not risk their academic future merely for suggesting that. But it would be a massively uphill battle to get people to adopt it—you might as well urge them to write the papers in Esperanto!

Comment #13 November 26th, 2020 at 11:52 am

First and foremost, happy thanksgiving to you and yours, Scott!

Indeed, happy thanksgiving to everyone here: regular posters and lurkers, lefties and righties and everyone in between and orthogonal, etc.!

I’m especially thankful to you for reminding me there is interesting CS literature that’s not explicitly quantum. (I often am like a horse with blinders, I am.)

To wit, I missed the paper to which you linked regarding “turning any hard-on-average NP problem into one that’s guaranteed to have solutions”, that is, this paper:

R. Pass and M. Venkitasubramaniam

“Is it Easier to Prove Theorems that are Guaranteed to be True?”

https://arxiv.org/abs/1906.10837

That’s highly intriguing… even if my personal experience is quite the opposite! I’ve personally proven with great ease plenty of sweeping theorems that aren’t true!! Sadly, it’s only with great effort that I’ve proven few mundane ones that are true. 😉

Comment #14 November 26th, 2020 at 12:40 pm

Bill Kaminsky #13: Indeed, this is one of those unfortunate paper titles where you’ll get completely confused by something that turns out to be irrelevant unless you read through to the paper itself!

Comment #15 November 26th, 2020 at 8:29 pm

> Feel free to use the comments on this post to talk about recent progress in quantum computing or computational complexity!

I’m still hoping that you’ll write something about the recently revised MIP*=RE by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen (https://arxiv.org/abs/2001.04383). This blog used to be about mathematics and I liked it that way.

Comment #16 November 26th, 2020 at 9:43 pm

Well, P vs. NP is surely relevant to the fate of civilization, via its connection to AI. In fact, it’s probably more relevant than the recent US election. Important though the politics may have been in the short term, I’d say that GPT-3 was the more significant event this year. Whilst not particularly intelligent in itself, it got people thinking about AI and seems to have led to some insights.

With my concentration sharpened during months of lock-down, an intense period of contemplation enabled me to finally break through to understanding the deep metaphysics of reality and realizing what’s really going on in the universe 😀

I think there must be a ‘generalized complexity theory’ that unifies computational complexity, cog-sci and complex systems. Also, physics, information and math are all objectively real, and some kind of multiverse picture seems to be an inevitable consequence of those assumptions.

Three things are central: *causality* , *complexity* and *compositionality*. We can order (compose) both knowledge and physical events using complexity measures, and what I think we’re really aiming at is to show that time is in some sense equivalent to complexity.

A generalized *data compression* (coding theory) is the key to it all, I think, but “compression” is still not quite right as the measure of intelligence. Rather, the notion should be generalized to something like William Whewell’s notion of *consilience*.

So if I were to take a crack at the meaning of life in one word, I’d say it’s *consilience*. There’s a notion of an economy of thought that “ties all knowledge together”: Lex Fridman, there’s your answer! 😀

https://en.wikipedia.org/wiki/Consilience

There are no fixed truths, it’s all relative to specified contexts I think, and the notion of truth apart from a specified context doesn’t make sense. Neither math nor the laws of physics are fixed, they are still evolving. Math is real, but it’s not in some platonic realm, it’s physical!

The evolution of the universe itself reflects the principle of consilience. At the beginning of time, I think there were 2 time dimensions, which could be called algebraic time and logical time. And what’s happening is that they are merging into one time dimension! So the dimensionality of time is itself variable, and is a non-integer number between 1 and 2. Algebraic and logical time are merging into topological time, which constitutes both the advance of entropy and the growth of knowledge.

There are different senses of the concept ‘time’: as a coordinate system, as a directionality, as causality, as awareness itself. But I think these are all related, and the goal is to find out exactly how.

The multi-verse is evolving towards some kind of equilibrium, which I think is the balance between simplicity and complexity. It’s not simplicity or complexity alone we value, but rather the balance between the two.

With the end of Trump and the pandemic, civilization surges forward once again! They were but a flesh wound, a minor inconvenience on the road to post-humanity.

Comment #17 November 28th, 2020 at 5:34 am

Great podcast, very informative! Regarding clusters and centers, here is another example, Budapest: https://www.nature.com/articles/35051162

Comment #18 November 29th, 2020 at 11:17 am

Scott,

Do you ever find yourself vacillating between trying to prove a theorem and trying to disprove it? 🙂 I’d love to hear an anecdote of such, if any readily come to mind!

An easy example would be a problem like graph isomorphism — to adapt a Peter Shor quote, I can easily imagine someone trying to find a BQP/P algorithm for it on Mondays, and then trying to reduce a hard problem to it on Tuesdays 🙂

Comment #19 November 29th, 2020 at 11:37 am

Dr. Lubos Motl posted a paper recently on his site that he claims gives some hope that P=NP (which is of course doubtful, even though Dr. Motl puts the chances at even odds). I was wondering if you had read the paper and if you would comment briefly on it. https://arxiv.org/abs/2011.01649

Comment #20 November 29th, 2020 at 2:29 pm

G #18: Yes, that’s so common in research that it’s hard to think of just one example. Here’s one, though: when I worked 4 or 5 years ago on shadow tomography, I alternated between trying to rule it out and trying to show it’s possible (the latter turned out to be right in the end).

Comment #21 November 29th, 2020 at 3:36 pm

Adam H #19: I had a brief look at that paper. Even though it’s only heuristic rather than rigorous, it looks potentially interesting! But as the paper itself explains, and as Lubos apparently doesn’t understand, even assuming the algorithm works exactly as hoped, there are no direct implications for P vs. NP. This is for two reasons. First, the claimed running time appears to be subexponential rather than polynomial (though I couldn’t find the exact claimed bound—could you?). Second, and most importantly, this is for random instances of #SAT from a particular distribution. Random instances are very, very often easier than worst-case ones. Indeed, we already know dozens of other examples of distributions over instances of NP-complete or #P-complete problems that are efficiently solvable, but we don’t treat that as evidence for P=NP or P=P^{#P}: it’s just evidence that those instance distributions don’t capture the full difficulty of the problems.

Comment #22 November 30th, 2020 at 9:23 am

After reading and thinking over your busy beaver post (yes, I know, I’m slow), I have this question:

If f(n) is computable for every n, and F = lim f(n) as n goes to infinity exists, is F a computable number?

My intuition says yes (I mean, “computable number” and “convergence of a series” are almost defined the same way: you can get as close as you want by taking n / the number of computational steps sufficiently large), but I get weird results when I try to apply it.

Comment #23 November 30th, 2020 at 1:14 pm

Apparently there’s still plenty of room for breakthroughs in good old classical computing (AlphaFold):

Comment #24 November 30th, 2020 at 2:28 pm

hwold #22:

> If f(n) is computable for every n, and F = lim f(n) as n goes to infinity exists, is F a computable number?

Every integer F is “computable,” in the sense that there exists a program to print F (namely, a program that just says “print F”). It’s only functions and infinite sequences that can be uncomputable.

Probably you meant to ask: if f is computable, and \( \lim_{n\to\infty} f(n) \) exists, then is the value of the limit provable in ZFC or some other formal system? Here the answer is no. As a counterexample, let f(n) = 1 if there’s a proof of ZFC’s inconsistency that’s n bits long or shorter, and f(n) = 0 otherwise. Then f is computable and \( \lim_{n\to\infty} f(n) \) is either 0 or 1, but if ZFC could decide which, then it would decide its own consistency.

Comment #25 November 30th, 2020 at 3:29 pm

Scott #24, F is an integer?

Comment #26 November 30th, 2020 at 5:34 pm

Yeah, n is an integer but f(n) is not (necessarily) so. For example, f(n) may converge to pi, which is computable. Probably I should have written “computable real” instead of “computable number” for clarity. In my mind, f is something like

$$f(n) = \sum_{k=0}^{k=n}\frac{1}{k!}$$

f(n) is computable for every n, and it converges to e, which is computable too. It looks to me like it should be a theorem, given the definitions of convergence and computable number, but I have some lingering doubt, namely because of \(\Omega(n, k)\), which is the number of n-state Turing machines that halt before k steps divided by the total number of n-state Turing machines. Obviously \(\Omega\) is computable for every n, k, and taking the limit yields Chaitin’s constant, which is not computable.
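As a quick numerical sanity check of that example (purely illustrative, using exact rational arithmetic):

```python
import math
from fractions import Fraction

def f(n: int) -> Fraction:
    """Partial sum sum_{k=0}^{n} 1/k!: an exactly representable rational."""
    total, fact = Fraction(0), 1
    for k in range(n + 1):
        if k > 0:
            fact *= k          # fact == k!
        total += Fraction(1, fact)
    return total

# Each f(n) is rational (hence computable), and the limit e is computable too.
assert abs(float(f(20)) - math.e) < 1e-12
```

Note that here the rate of convergence is itself computable (the tail beyond n is below \(2/(n+1)!\)), which turns out to be the crucial feature distinguishing this example from the \(\Omega(n,k)\) one.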

So either (1) it’s not a theorem, (2) the limit of that function does not exist (how?), or (3) I’m missing something else. I’m currently leaning towards (3), but that’s just another way of saying that I’m pretty confused.

Comment #27 November 30th, 2020 at 6:23 pm

hwold #22: Sorry, I misunderstood (you should have specified real numbers!). The answer to your question is no. As a counterexample, take any uncomputable real (like Chaitin’s constant); then it can be written as a limit of rationals, and every rational is computable.

Comment #28 November 30th, 2020 at 7:15 pm

I have started writing a draft article for Wikipedia on VP vs. VNP, as there isn’t currently one. If anyone would like to contribute to it, it’s on my sandbox here. If you don’t want to help write the article but have topics or links you think should be included, feel free to drop them on the Talk Page.

I’m starting with the P vs. NP problem’s Wikipedia article as a template for the first draft, hence the large amount of text that has nothing to do with Valiant’s Conjecture. The plan is to overwrite as I go. Right now my GPT-3 replication is the main focus of my attention, but I do hope to finish the article eventually.

Comment #29 December 1st, 2020 at 3:25 am

Scott #27:

Yes, I can see that every real (including uncomputable reals) can be written as the limit of a sequence of rationals; but can it be written as the limit of a computable sequence of rationals?

Because I can accept that my non-theorem fails (\( \Omega(n, k) \) is pretty much convincing me of that), but I’d like to understand why and where. Let’s try to sketch a proof of my non-theorem:

Let f(n) be a sequence of computable reals, converging to F = lim f(n). F is a computable real iff we can compute any arbitrary digit of it. Let’s try to compute digit k of F (in base 10). Since the limit exists, by the definition of convergence there is an \(n_k\) such that \(|F-f(n)| \lt 10^{-k-1}\) for every \(n \gt n_k\). Therefore the k-th digit of F is the same as the k-th digit of \( f(n_k) \), which is computable.

Which step of this proof sketch is invalid?

Comment #30 December 1st, 2020 at 8:09 am

Scott #21:

> “we already know dozens of other examples of distributions over instances of NP-complete or #P-complete problems that are efficiently solvable, but we don’t treat that as evidence for P=NP or P=P^{#P}: it’s just evidence that those instance distributions don’t capture the full difficulty of the problems.”

Is an instance that’s easy in one NP-complete problem (e.g. subset sum) also always easy when transformed (in linear time) into any other NP-complete problem (like 3SAT)?

I.e., is the “hardness” of instances “universal”?

Comment #31 December 1st, 2020 at 8:52 am

hwold #29: That’s a different question! ~~Saying that the sequence of rationals is computable is equivalent to saying that the real number itself is computable~~ (see below for an important correction to this statement).

Comment #32 December 1st, 2020 at 9:03 am

fred #30: I wasn’t talking about individual instances, but about distributions over instances. It’s possible to map an easy distribution over instances to a hard distribution over instances by hiding its origins—that’s exactly what happens with trapdoor one-way functions, for example—but more commonly the hardness or easiness would be more-or-less preserved.

Comment #33 December 1st, 2020 at 10:26 am

hwold #29: This does not directly answer your question, but it may interest you anyway, since it’s related to the question of whether computable real numbers are “closed” under various operations. Classically, the Brouwer fixed-point theorem tells us (among other things) that a continuous mapping of the (closed) unit square to itself must have a fixed point. One might ask, does the Brouwer fixed-point theorem remain true if we restrict our attention to computable real numbers? There are a couple of different ways to interpret this question, but perhaps the most natural interpretations lead to the answer no. This surprising fact goes back to Orevkov in 1963. For an expository account, see for example “Computable counter-examples to the Brouwer fixed-point theorem” by Petrus H. Potgieter.

Comment #34 December 1st, 2020 at 12:06 pm

Scott #31: I’m not sure exactly what you’re asserting, but a Specker sequence is a computable, monotonically increasing, bounded sequence of rational numbers whose supremum is not a computable real number. So even if a computable sequence of rational numbers converges to a real number, that real number need not be computable (according to the usual definition of a computable real number).

hwold #29: One gap in your argument is that the function \(k \mapsto n_k\) is not necessarily computable. That is, there does indeed exist a suitable number \(n_k\) for any given \(k\), but there need not exist an algorithm for computing \(n_k\) from \(k\).

Comment #35 December 1st, 2020 at 1:11 pm

Timothy Chow #34: Duhhh, yes, of course and thank you!

I should have said: a computable sequence of rational numbers converges to a real number at a computable rate if and only if the real itself is computable.

Comment #36 December 2nd, 2020 at 6:38 pm

Timothy #34

Super cool!! So, an example based on the construction from the article is:

1. The number \(r\) is the real number encoding the halting language, for some listing of unary TMs.

2. The sequence \(q_n\) is given as follows: \(q_0\) is \(0.0\overline{00}\). List all TMs and start simulating them as you list them (i.e., TM 1 step 1, TM 1 step 2, TM 2 step 1, TM 1 step 3, TM 2 step 2, TM 3 step 1, etc.). Whenever one of them halts, take the current \(q_i\), add a “1” at the digit position where that TM was listed, and call the result \(q_{i+1}\).

And that does it! The limit of \(q_n\) is \(r\), but while the members of \(q_n\) are rational and computable for all \(n\), still \(r\) is uncomputable.

So clearly the key operation here was rearranging the order in which we populate the bits of our rational approximation: from the infinite sequence of rationals that converges monotonically to \(r\) (an uncomputable sequence) into a related, computable sequence (arranged in order of “halting time of \(i\) plus \(i\)”)! Neat!
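The dovetailing construction can be simulated with toy stand-ins for the TMs, where each “machine” is just a known halting time (a real version obviously can’t exist, since it would decide the halting problem; the numbers below are arbitrary):

```python
from fractions import Fraction

# Toy stand-ins for TMs: machine i "halts" after halt_at[i] steps (None = never).
halt_at = [3, None, 1, None, 7, 2]

def dovetail(halt_at, rounds):
    """Dovetail: in round r, run one more step of machines 0..r.
    Each observed halt sets one decimal digit of the approximation q_i."""
    qs = [Fraction(0)]
    steps = [0] * len(halt_at)
    halted = set()
    for r in range(rounds):
        for i in range(min(r + 1, len(halt_at))):
            if i in halted:
                continue
            steps[i] += 1
            if halt_at[i] is not None and steps[i] == halt_at[i]:
                halted.add(i)
                qs.append(qs[-1] + Fraction(1, 10 ** (i + 1)))  # set digit i+1
    return qs, halted

qs, halted = dovetail(halt_at, 20)
print(float(qs[-1]))  # → 0.101011: digits 2 and 4 stay 0 (machines 1, 3 never halt)
```

The \(q_i\) produced this way form a computable, strictly increasing sequence of rationals, while the limit encodes the (here simulated) halting set, exactly as in the construction above.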

Scott #20

Very cool, thank you!!

Comment #37 December 3rd, 2020 at 8:52 am

Nice recap on AlphaFold 2, for those interested (how hard protein folding is, how they did it, why it matters).