Three updates

  1. Hooray, I’m today’s “Featured ACM Member”! Which basically means, yet another interview with me about quantum computing, with questions including what’s most surprised me about the development of QC, and what students should do to get into the field.
  2. I’m proud to announce that An Automated Approach to the Collatz Conjecture, a paper by Emre Yolcu, myself, and Marijn Heule that we started working on over four years ago, is finally available on the arXiv, and will be presented at the 2021 Conference on Automated Deduction. Long story short: no, we didn’t prove Collatz, but we have an approach that can for the first time prove certain Collatz-like statements in a fully automated way, so hopefully that’s interesting! There was also a Quanta article even before our paper had come out (I wasn’t thrilled about the timing).
  3. The legendary Baba Brinkman has a new rap about quantum computing (hat tip to blog commenter YD). Having just watched the music video, I see it as one of the better popularization efforts our field has seen in the past 25 years—more coherent than the average journalistic account and with a much better backbeat. (I do, however, take a more guarded view than Brinkman of the potential applications, especially to e.g. autonomous driving and supply-chain optimization.)

30 Responses to “Three updates”

  1. Bram Cohen Says:

    Hey Scott, some thoughts on the Collatz conjecture, some of which you undoubtedly know but I’d like this to be self-contained so bear with me:

    In Collatz, whenever you do a 3x+1 step there’s always a division by two immediately afterwards, so a possibly cleaner formulation is to say that the steps are either (3x+1)/2 or x/2. This is sort of like flipping a coin as to whether you win or lose x/2 from your amount.
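    This “shortcut” map is easy to play with numerically. A minimal Python sketch (the function names are mine, just for illustration):

```python
def shortcut_step(x):
    """One step of the 'shortcut' Collatz map:
    (3x+1)/2 if x is odd, x/2 if x is even."""
    return (3 * x + 1) // 2 if x % 2 else x // 2

def shortcut_trajectory(x, limit=10**6):
    """Iterate the shortcut map from x until reaching 1 (or hitting limit)."""
    out = [x]
    while x != 1 and len(out) < limit:
        x = shortcut_step(x)
        out.append(x)
    return out
```

    For example, `shortcut_trajectory(7)` walks 7 → 11 → 17 → 26 → … → 1.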

    If you list the first k up/down events for the numbers from 1 to 2^k, each possible sequence will occur exactly once. This is the ‘nice’ property which makes Collatz so intractable: you can’t pin it down to any particular behavior, because everything always happens somewhere. If you take the mean of the kth step of all the numbers from 1 to 2^k it will generally be slightly more than 2^(k-1) because of the 1/2 added at each step, even though the median and geometric mean will be much lower because of the uneven distribution.
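    That “everything happens somewhere” property is easy to verify for small k, since the first k up/down choices depend only on n mod 2^k. A quick check in the shortcut-map convention above (helper name mine):

```python
def parity_vector(n, k):
    """The first k up/down choices of the shortcut Collatz map from n:
    1 = odd step ((3x+1)/2, 'up'), 0 = even step (x/2, 'down')."""
    bits = []
    for _ in range(k):
        bits.append(n % 2)
        n = (3 * n + 1) // 2 if n % 2 else n // 2
    return tuple(bits)

# Among n = 1, ..., 2^k, every length-k up/down pattern occurs exactly once.
k = 10
patterns = {parity_vector(n, k) for n in range(1, 2**k + 1)}
assert len(patterns) == 2**k
```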

    As it happens the alternating 1, 2 terminal sequence of everything has equal numbers of up and down steps, but that isn’t true of every number you could possibly add. For example 3x+43 settles into a loop which has more down than up steps and 3x+53 has more up than down steps. Since smaller numbers get into the final loop faster than bigger numbers, the exactly even distributions mentioned earlier mean that the counterparts of those steps have to go to larger numbers, which would seem to imply that the mean of the kth step of all the numbers from 1 to 2^k of 3x+43 is on average quite a bit more than 2^(k-1).

    So… maybe the Collatz analogue 3x+43 isn’t true? And maybe 3x+53 is easier to prove?
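    The claims about those loops can be checked directly. Here’s a rough sketch (function name mine) that follows the 3x+q shortcut map from 1 until a value repeats, then counts up (odd) versus down (even) steps around the cycle it lands in:

```python
def cycle_up_down(q, start=1):
    """Follow x -> (3x+q)/2 (x odd) or x/2 (x even) from `start` until a
    value repeats; return (up_steps, down_steps) counted around the cycle."""
    seen, steps, x = {}, [], start
    while x not in seen:
        seen[x] = len(steps)
        steps.append(x)
        x = (3 * x + q) // 2 if x % 2 else x // 2
    cycle = steps[seen[x]:]
    ups = sum(1 for v in cycle if v % 2)
    return ups, len(cycle) - ups

print(cycle_up_down(43))  # loop reached from 1: more downs than ups
print(cycle_up_down(53))  # loop reached from 1: more ups than downs
```

    (This only inspects the particular cycle reached from the chosen starting point; 3x+q systems can have several cycles.)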

  2. YD Says:

    Here are a few things that bothered me about that song:

    * The figures with a bunch of Bloch spheres side by side. I understand that it’s probably hard to put a \(2^53\)-dimensional Hilbert space in a video, but that picture makes it look like a boring classical analog computer.
    * Applications like ML.
    * He mispronounced Mir. Mir doesn’t rhyme with “whir”. It (nearly) rhymes with “near”, “here”, and “sphere”, which are literally right there in the same sentence.

  3. Tristram Says:

    A question inspired by the ACM interview: what would it take to factor a 1000-digit number with Shor’s algorithm? A huge number of qubits, much less noise, or both? Could we factor any number at all with 50-100 noisy qubits?

  4. Scott Says:

    Tristram #3: The short answer is a few million qubits, and/or much better gate fidelities, and/or better error-correcting codes that can cope with near-future gate fidelities with much lower overhead.

    With 50-100 noisy qubits, you can’t factor (or even fit) any number that’s not easily factored classically. People sometimes publish papers using adiabatic quantum computing to factor 5- or 6-digit numbers, but that’s just for the rubes — it doesn’t scale, doesn’t nontrivially involve quantum mechanics, and doesn’t demonstrate anything interesting.

  5. Mo Nastri Says:

    > There was also a Quanta article even before our paper had come out (I wasn’t thrilled about the timing)

    Do you think that if you requested them to wait for your paper to come out before publishing their article they would have respected your wish?

  6. James Gallagher Says:

    YD #2

    Just some help with formatting, use the &ltsup&gt tag for superscripts

    so “2&ltsup&gt53” for 253

    or if you really want it bold

    “&ltb&gt2&ltsup&gt53&lt\b&gt” for 253

  7. Raoul Ohio Says:

    First song I ever heard with “double exponential” in it!

  8. Matt s Says:

    In my amateur study of the Collatz conjecture, it seems to require some insight into the prime factors of consecutive numbers.

    Given n=p1*p2*p3…, what are the prime factors of n+1? (Or n-1)

    Proving Collatz may not require answering that question in general, but surely a proof of termination would say something about it.
    I don’t see what the connection is between string rewriting and prime factors.

    Am I missing something, or just wrong in thinking Collatz boils down to understanding the prime factors of n+1?

  9. Scott Says:

    Mo Nastri #5:

      Do you think that if you requested them to wait for your paper to come out before publishing their article they would have respected your wish?

    What happened was, I talked about our work-in-progress on Collatz because I was under the impression that it was going to form just one small part of a much broader article about automated theorem proving (which might indeed have been the original plan). I wasn’t prepared for Quanta to run a whole feature on a paper that didn’t exist yet; I learned that it would only when that feature appeared. Yes, I think they would’ve respected my wishes if I’d known enough to formulate them explicitly. Lesson learned for next time!

  10. Alexei Says:

    Great stuff! Looking forward to reading it.
    I think the following two references might be relevant:

    Doron Zeilberger,
    Teaching the Computer how to Discover(!) and then Prove(!!) (all by Itself(!!!)) Analogs of Collatz’s Notorious 3x+1 Conjecture,
    https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/collatz.html

    Błażewicz, Jacek; Pettorossi, Alberto
    Some properties of binary sequences useful for proving Collatz’s conjecture.
    https://zbmath.org/0547.10006

  11. Sniffnoy Says:

    James Gallagher #6: You appear to have forgotten a number of semicolons…

  12. Mike Stay Says:

    James Gallagher #6: Does escaping the open/close brackets work for all HTML tags, or just a whitelisted set?

    The “sup” example: 2<sup>53</sup>

    Something that really shouldn’t work: <script>alert(1)</script>

  13. Mike Stay Says:

    Mike Stay #12: Oh, I see: the site messes up your HTML in order to sanitize it. 253 works.

  14. Filip Dimitrovski Says:

    Mike #13, James #6

    The preview plugin isn’t completely in sync with the WordPress sanitizing logic, but in this case it didn’t matter.

    The issue with YD #2 is they used TeX with the code 2^53, which according to (La)TeX rules is \( 2^{5}3 \), regardless of this blog. The proper way is to use curly braces, i.e. 2^{53}, which should render as \( 2^{53} \).
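    For reference, this is standard (La)TeX behavior: `^` binds only to the single token that follows it, so the exponent needs braces to group more than one character.

```latex
$2^53$   % ^ grabs only the next token: renders as 2^{5} followed by 3
$2^{53}$ % braces group the whole exponent: renders as 2^{53}
```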

  15. YD Says:

    Yes, I missed a pair of curly braces. Can we please not derail the comment section over this? We can talk about quantum rap or the Collatz conjecture or something.

    I’m confused about the example in Section 4.2 and the claim that “Termination of the rewriting system \(\mathcal{F}\) is equivalent to the convergence of \(F\)”. The paper claims that the proof is the same as the proof of Theorem 3.15. But Claim 3 of Theorem 3.15 does not hold for Farkas’ function and the corresponding rewriting system: there are no rules applicable to \(\mathtt{⊲ff⊳}\), even though \(\mathrm{Val}(\mathtt{⊲ff⊳})=4\ne 1\). In fact, if the two least significant digits are in binary, none of the “interesting” rules \(\mathcal{D}_F\) can ever be used.

    You could probably fix this by using binary-seximal mixed base instead of binary-ternary.

  16. James Gallagher Says:

    I seem to have messed up that html formatting example, lol (it came out ok in the preview)

    as Sniffnoy #11 suggested, I needed some semicolons (which won’t display below)

    2<sup>53</sup&gt displays as 253

    <b>2<sup>53</sup>&lt/b&gt displays as 253

    Mike #12

    Not sure

    (@ Scott, if this post displays as garbage when submitted just delete it, sorry!)

  17. James Gallagher Says:

    Yup, came out as garbage. I give up, lol!

  18. Emre Yolcu Says:

    YD #15:

    That is a good point. The equivalence does hold, although I see now that calling the two proofs *essentially* the same was probably overkill. Claim 3 in the proof of Theorem 3.15 is stronger than necessary for the result it helps us achieve: from a nonconvergent trajectory we construct a nonterminating rewrite sequence. The actual argument for Farkas’ map is roughly the same for the forward direction (from rewrite sequences to trajectories). For the backward direction, take any nonconvergent F-trajectory, and write the first number in the trajectory in ternary. Then if we always perform the rightmost possible rewrite, the resulting rewrite sequence simulates the trajectory, since it avoids running into a string that ends with two binary symbols. I hope that makes it clear; otherwise, feel free to email me.

  19. Emre Yolcu Says:

    Correction to Emre Yolcu #18: swap “forward” with “backward”.

  20. James Gallagher Says:

    Filip Dimitrovski #14

    Sorry I missed your comment, it explained the issue.

  21. YD Says:

    Emre Yolcu #18: I see. That makes sense.

    Do you think the fact that the rewrite system can “get stuck” like that makes finding a termination proof easier/harder? Or does it not matter?

  22. Vincent Says:

    Can we use this comment section to quibble about notational issues in the paper? If not, I’ll withdraw the comment. I really like the Collatz paper, but I find the notation of Definition 3.9 very confusing because it is not clear what letters can be replaced by numbers in what way.

    After studying the examples I think the source of the confusion is that the double index $${n_i}_{b_i}$$ was intended as $$(n_i)_{(b_i)}$$ while I initially interpreted it as $$n_{(i_{(b_i)})}$$, which of course is also only one of many possibilities.

    Nicely enough, the LaTeX environment of this blog complains that I should clarify the braces before it will render my expressions, but may I recommend you do something similar in the paper? Human paper readers and automatic WordPress LaTeX functions on blogs are not all that different in this respect.

  23. Vincent Says:

    OK, maybe I should withdraw my comment anyway since this notation issue is of course tiny compared to the overall interestingness of the paper.

  24. Emre Yolcu Says:

    YD #21: Termination is a worst-case statement, so I don’t think it makes any difference. That said, the difficulty of finding an interpretation for some specific instance depends on several factors, some of which we only know how to investigate empirically, so it’s hard to say anything definitive.

  25. Daniel Tung Says:

    Would love to know more about your view on “quantum states as proofs or advice or unclonable currency”. I am very interested in that too.
    Non-fungible tokens (unique cryptocurrency assets, like signed collectibles) are now all the rage in the cryptocurrency community. Could there be unique quantum states, such as an unknown quantum state prepared by an encrypted process/algorithm?

  26. Scott Says:

    Daniel Tung #25: This is a big subject, but see e.g. my papers (especially, my two main quantum money papers here and here), or my Barbados lecture notes.

  27. Raoul Ohio Says:

    Check out Scott on Quanta:

    https://www.quantamagazine.org/why-is-quantum-computing-so-hard-to-explain-20210608/

  28. Craig Helfgott Says:

    I read your Collatz paper; it was very interesting. In Section 4.4 you discuss the related Syracuse function, and explicitly call out finding other functions with improved derivational complexity. I have some ideas in that direction.

    Consider $$\text{Syr}'(n) = \frac{n}{2^{\lceil k/2 \rceil}}, n \text{ even; } 3n+1, n \text{ odd}$$ with \(k\) defined to be the exponent of the largest power of 2 dividing \(n\). I believe we can create a rewriting system implementing Syr’. The standard Collatz function takes \(k\) steps to get from an odd number to an odd number; your function S takes (I believe) \(\lfloor\frac{k+1}{2}\rfloor\) steps; Syr’ takes \(\sim\lg(k)\) steps. Likewise, your rewriting systems take \(k\) steps to commute a ternary digit through a string of \(k\) f’s, and this system would take \(O(\lg k)\).
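    To make the step-count comparison concrete, here is a minimal leftmost-rewriting sketch (helper names mine), using just the single commuting rule f0 → 0f mentioned below: commuting one digit through k f’s takes exactly k rule applications.

```python
def rewrite_once(s, rules):
    """Apply the first applicable rule at the leftmost position, or return None."""
    for i in range(len(s)):
        for lhs, rhs in rules:
            if s.startswith(lhs, i):
                return s[:i] + rhs + s[i + len(lhs):]
    return None

def rewrite_to_normal_form(s, rules):
    """Rewrite until no rule applies; return (final string, number of steps)."""
    count = 0
    while (t := rewrite_once(s, rules)) is not None:
        s, count = t, count + 1
    return s, count

# Commuting the digit 0 through k = 5 f's takes 5 steps.
final, steps = rewrite_to_normal_form("fffff0", [("f0", "0f")])
assert (final, steps) == ("0fffff", 5)
```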

    Here is the idea behind the rewriting system. Retain your symbols \(\langle,\rangle, f,t,0,1,2\); add symbols [,],u,d,S,H,x,y. We write \(n\) again as before in mixed binary/ternary with f,t,0,1,2, surrounded by \(\langle,\rangle\). However, M sequential f’s will be represented as f[M-1], with M-1 written in binary using bits u,d.

    So instead of “f”, we have “f[]”, and instead of “fffff” we have “f[udd]”. Add a rule \([d \rightarrow [\) to get rid of leading 0s.

    Now we drop the rules \(f0 \rightarrow 0f\), \(f1 \rightarrow 0t\), \(f2 \rightarrow 1t\), \(f\rangle \rightarrow \rangle\).
    We instead have $$]0 \rightarrow 0], u0 \rightarrow 0u, d0 \rightarrow 0d, f[0 \rightarrow 0f[,$$ $$u]1 \rightarrow uSH]0t, d]1 \rightarrow dSH]0t, f[]1 \rightarrow 0t,$$ $$u]2 \rightarrow uSH]1t, d]2 \rightarrow dSH]1t, f[]2 \rightarrow 1t,$$ $$uS \rightarrow dy, dS \rightarrow Su, [S \rightarrow [x, yu \rightarrow uy, yH \rightarrow e, xu \rightarrow x, f[xH] \rightarrow e,$$ $$u]\rangle \rightarrow ]\rangle, d]\rangle \rightarrow ]\rangle, f[]\rangle \rightarrow \rangle.$$
    The characters x,y,S,H implement a binary decrement Turing machine; the presence of the H prevents multiple decrements from happening at once. The last set of rules implements the \(n\) even case of Syr’.

    Now, this already represents a partial speedup. To get the full speedup we would want a binary *adder*, so some \(O(lg(M+N))\) sequence of rewrites could take “f[M]f[N]” to “f[M+N]”, and do so without having issues with substrings of the form “f[M]f[N]0”, “f[M]f[N]1”, “f[M]f[N]2”, “f[M]f[N]f[P]”.

  29. Craig Helfgott Says:

    Actually, we can do one better and drop the symbol f entirely; it is extraneous to the system above.

  30. Craig Helfgott Says:

    And of course we actually want to take “[M][N]” to “[M+N+1]” considering the definition above.
