Open thread on new quantum supremacy claims
Happy 4th to those in the US!
The group of Chaoyang Lu and Jianwei Pan, based at USTC in China, has been on a serious quantum supremacy tear lately. Recall that last December, USTC announced the achievement of quantum supremacy via Gaussian BosonSampling, with 50-70 detected photons—the second claim of sampling-based quantum supremacy, after Google’s in Fall 2019. However, skeptics then poked holes in the USTC claim, showing how they could spoof the results with a classical computer, basically by reproducing the k-photon correlations for relatively small values of k. Debate over the details continues, but the Chinese group seeks to render the debate largely moot with a new and better Gaussian BosonSampling experiment, with 144 modes and up to 113 detected photons. They say they were able to measure k-photon correlations for k up to about 19, which if true would constitute a serious obstacle to the classical simulation strategies that people discussed for the previous experiment.
In the meantime, though, an overlapping group of authors had put out another paper the day before (!) reporting a sampling-based quantum supremacy experiment using superconducting qubits—extremely similar to what Google did (the same circuit depth and everything), except now with 56 qubits rather than 53.
I confess that I haven’t yet studied either paper in detail—among other reasons, because I’m on vacation with my family at the beach, and because I’m trying to spend what work-time I have on my own projects. But anyone who has read them, please use the comments of this post to discuss! Hopefully I’ll learn something.
To confine myself to some general comments: since Google’s announcement in Fall 2019, I’ve consistently said that sampling-based quantum supremacy is not yet a done deal. I’ve said that quantum supremacy seems important enough to want independent replications, and demonstrations in other hardware platforms like ion traps and photonics, and better gate fidelity, and better classical hardness, and better verification protocols. Most of all, I’ve said that we needed a genuine dialogue between the “quantum supremacists” and the classical skeptics: the former doing experiments and releasing all their data, the latter trying to design efficient classical simulations for those experiments, and so on in an iterative process. Just like in applied cryptography, we’d only have real confidence in a quantum supremacy claim once it had survived at least a few years of attacks by skeptics. So I’m delighted that this is precisely what’s now happening. USTC’s papers are two new volleys in this back-and-forth; we all eagerly await the next volley, whichever side it comes from.
While I’ve been trying for years to move away from the expectation that I blog about each and every QC announcement that someone messages me about, maybe I’ll also say a word about the recent announcement by IBM of a quantum advantage in space complexity (see here for popular article and here for arXiv preprint). There appears to be a nice theoretical result here, about the ability to evaluate any symmetric Boolean function with a single qubit in a branching-program-like model. I’d love to understand that result better. But to answer the question I received, this is another case where, once you know the protocol, you know both that the experiment can be done and exactly what its result will be (namely, the thing predicted by QM). So I think the interest is almost entirely in the protocol itself.
Comment #1 July 4th, 2021 at 7:22 pm
One question I had, not directly related to this specific experiment, is whether the assessment that the output was indeed sampled from the right target distribution (either in the classical simulation, or the quantum experiment) drew on work on quantum distribution testing? I haven’t worked on that particular area, but I know that many people have come up with sample-efficient testing algorithms for exactly this type of question, in a variety of quantum sampling models… and that seems like a natural application (instead of using ad hoc methods, or more standard hypothesis-testing methods that either only provide asymptotic guarantees or require far more data than necessary to reach the desired accuracy).
(This is partly prompted by this paper by Rinott, Shoham, and Kalai (2019), which seems to do exactly that — develop their own tests and ignore the whole body of work on quantum distribution testing: https://arxiv.org/abs/2008.05177)
Comment #2 July 4th, 2021 at 8:17 pm
Clément Canonne #1: No, all the verification methods that I know about heavily exploit the fact that with quantum supremacy experiments, you already know the target distribution (or could know it with enough computation time), so you only need to test against that. You don’t need something that would work for an arbitrary distribution.
Comment #3 July 4th, 2021 at 8:26 pm
Scott #2: but that’s still relevant (it’s identity testing/one-sample goodness-of-fit) in that case — unless I am missing something… for instance (for the classical case — the quantum case is, based on my understanding, analogous, with different powers but similar gaps/separations): testing from samples whether an unknown distribution is equal to a *known* target distribution over N elements, or statistically far from it (TV distance ε), has sample complexity Θ(√N/ε²), whereas a naive method would require Ω(N/ε²) samples. So even to check whether the samples obtained conform to your known target, using those algorithms gets you a lot more bang for your buck: if you can generate q samples from the unknown device, you get much better accuracy/confidence (scaling as N^¼ / √m instead of √(N/m)).
Again, the numbers above are for the classical case, but afaik quantum identity testing has similar improvements (quadratic over the naive approach).
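To make the gap concrete, here is a toy Python sketch (my own illustration, not from any of the papers discussed) of a chi-squared-style identity-testing statistic in the spirit of the sublinear testers mentioned above. I’m glossing over the exact acceptance thresholds, and the zero-mean property strictly holds under Poissonized sampling:

```python
import numpy as np

def identity_test_statistic(samples, p):
    """Chi-squared-style statistic for one-sample identity testing,
    in the spirit of Acharya-Daskalakis-Kamath:
        Z = sum_i ((X_i - m*p_i)^2 - X_i) / p_i
    where X_i are the empirical counts. Under (Poissonized) sampling
    from p itself, E[Z] = 0; if the true distribution q is far from p,
    E[Z] grows like m^2 * chi^2(q, p)."""
    m = len(samples)
    counts = np.bincount(samples, minlength=len(p))
    return float(np.sum(((counts - m * p) ** 2 - counts) / p))

rng = np.random.default_rng(0)
N = 1000
p = np.full(N, 1.0 / N)            # known target: uniform over N outcomes
q = p.copy()                       # perturbed distribution at TV distance 0.25
q[: N // 2] *= 1.5
q[N // 2:] *= 0.5

m = 2000                           # m << N: far fewer samples than outcomes
z_null = identity_test_statistic(rng.choice(N, size=m, p=p), p)
z_far = identity_test_statistic(rng.choice(N, size=m, p=q), p)
print(z_null, z_far)               # z_far should dwarf z_null
```

Note that m here is only twice N’s square-count of samples one would naively need per outcome; the separation in the statistic is what buys the sublinear sample complexity.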
Comment #4 July 4th, 2021 at 8:39 pm
Re: #3: sorry for the typos: also, I used q and m for the same thing, because consistent notation is hard.
Comment #5 July 4th, 2021 at 10:42 pm
Clement, Scott
I haven’t followed the latest papers but I think that unfortunately doing proper distribution testing with respect to a metric such as total variation is not feasible. The problem is that the domain size is exponential and so it’s not feasible to get a number of samples from the quantum experiment that is polynomial in the domain.
Thus the approach people take is to define a proxy such as linear cross entropy and then demonstrate that the quantum experiment is close to the ideal distribution under this proxy. However there are many other distributions that could be close under this proxy, and so this requires an extra computational assumption that none of these distributions could be realized by a classical algorithm.
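To spell out the proxy Boaz describes, here is a toy numpy sketch of the linear cross-entropy benchmark (my own illustration; a Haar-random state stands in for the hard-to-simulate random circuit’s output state):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                        # qubits, so the domain size is N = 2^n
N = 2 ** n

# Stand-in for a random circuit's output state: a Haar-random state vector.
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
p_ideal = np.abs(psi) ** 2    # the ideal output distribution

def linear_xeb(samples, p_ideal):
    """Linear cross-entropy benchmark: F = N * mean(p_ideal(x)) - 1.
    Close to 1 for a sampler matching p_ideal (by Porter-Thomas statistics),
    close to 0 for a uniform-noise sampler."""
    return len(p_ideal) * p_ideal[samples].mean() - 1

m = 200_000
xeb_good = linear_xeb(rng.choice(N, size=m, p=p_ideal), p_ideal)   # "device"
xeb_noise = linear_xeb(rng.integers(0, N, size=m), p_ideal)        # spoof
print(xeb_good, xeb_noise)    # roughly 1.0 and 0.0
```

This also makes Boaz’s point visible: the test only needs polynomially many samples, but computing p_ideal at all is the exponentially expensive part, and distributions other than the ideal one can also score well.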
Comment #6 July 5th, 2021 at 5:42 am
Yes, I was about to say what Boaz said — thanks! Anything of the form N^c won’t work because N is enormous. The test is computational, not statistical.
Comment #7 July 5th, 2021 at 7:07 am
I see — thanks for the clarification! (I guess one could do with polylog(N) assuming some extra, stronger access than sampling — like conditional sampling, or access to approximate evaluation oracle to the pmf, but that’s probably not available/easy to implement, and anyways is quite beyond the original question I had in mind)
Comment #8 July 5th, 2021 at 11:08 am
Thanks, Scott.
We had been performing the “new and better” GBS experiment (Jiuzhang 2.0) before Jiuzhang 1.0 was published. See the preprint https://arxiv.org/abs/2012.01625; at its end, we said: “Note added: An updated experiment with considerably improved SMSS collection efficiency and higher mode number (144) is being performed at its final stage at the time of the release of this paper.”
As experimental physicists, we were more excited about the stimulated emission of TMSS (thus scalable, optimal quantum light sources) and the phase programmability, together with other technological advances.
The discussion of the k-photon correlations was thanks to many inspiring comments from distinguished colleagues, which let us reveal more physics with the existing data. For Jiuzhang 1.0 with up to 76 detected photons and Jiuzhang 2.0 with up to 113 detected photons, their 200-second samples show k-photon correlations for k up to 13 (Jiuzhang 1.0) and 19 (Jiuzhang 2.0), respectively. If we collect samples for a longer time (say 100 hours), k can go further up, to ~43.
And now in the lab, even “bigger and better” GBS/RCS experiments are in progress.
Comment #9 July 5th, 2021 at 11:11 am
And a minor revision will appear on arXiv tomorrow.
Comment #10 July 5th, 2021 at 4:17 pm
Chaoyang #8: Thanks very much for the clarifications! If you have time, a couple quick questions that would help both me and my readers (or I could just, you know, read your paper… 🙂 ):
– Compared to the December 2020 paper, what was the main improvement to the experiment that let you detect higher-order interference? Was it lower photon loss? Something else? (I see that the number of modes is still of the same order as the number of photons, and I guess you’re still using threshold detectors rather than number-resolving detectors?)
– Are you still validating the results using a cross-entropy benchmark? You do the classical calculation for up to, what, like 40 photons or so?
Comment #11 July 5th, 2021 at 8:43 pm
Scott #10:
1. One technical improvement is the collection efficiency of the quantum light source, from the previous 0.628 to 0.918, by the use of stimulated TMSS. But it was not yet clear whether this contributed to the increase of higher-order interference. It seems to me that the high-order interference is mainly due to photon indistinguishability, which in both cases is quite high, 0.94-0.95. In Jiuzhang 1.0 we already saw 13 orders (limited by the sample size), which were already beyond supercomputers.
2. We have a very new method in this direction too. In order to attract you into reading the full manuscript, I would like to not explain the details but paste two sentences from this part: “For all the tested data, we observe that not only Delta H > 0, but also follows the same increasing trend as in the low-intensity regime. This allows us to infer that the full mode samples in the quantum advantage regime that share the same set-up can be validated with stronger confidence.”
Comment #12 July 6th, 2021 at 4:56 pm
It’s hard to know who to praise more: the technical skills of the quantum experimentalists, or the tenacity and inventiveness of the classical skeptics!
Comment #13 July 8th, 2021 at 10:40 pm
The IBM paper sets up an analogy between read-only classical bits and qubits used as the control part of 2-qubit gates. This seems odd, as controlled gates modify the control qubit’s state.
Comment #14 July 9th, 2021 at 1:11 pm
Johnny #13: I thought about this too, but I think this isn’t an issue if the control qubits are always prepared in the control basis (like I believe they are in this paper).
Comment #15 July 9th, 2021 at 9:54 pm
A naive question from someone who knows very little about boson sampling: a big discussion point about the initial December 2020 announcement of boson sampling supremacy was the fact that the computer was not programmable (or, in Scott’s words in his blog post, was at most “programmable at the firmware level”, as opposed to on the fly).
I see that by contrast, this improved device is “phase-programmable”. Does that entail full “boson sampling programmability” (either in principle, or as actually demonstrated in this particular experiment)? In his blog post, Scott described full programmability as the ability to “reconfigure the beamsplitters on the fly” – is the phase-programmability described in this paper equivalent to that, or is it a more limited form of programmability?
Comment #16 July 10th, 2021 at 7:44 am
#15 Ted
The transformation matrix in the GBS is determined by both the interferometer and the squeezing parameters and phases of the input squeezers. By changing the input parameters, the underlying matrix can be reconfigured and, therefore, the GBS machine can be programmed to solve different problems (different matrices). We demonstrated the programmability of the GBS quantum computer by setting 30 different groups of random phases. The change of the phases is done using adjustable electric delay lines which we control with a computer program. This was in principle also doable in the 2020 paper, but we didn’t do it then.
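For readers wanting the formula behind this: in the standard pure-squeezed-input GBS formalism, the output probabilities are governed by a single symmetric matrix built from the interferometer unitary and the input squeezing parameters. The following is my own toy sketch (ignoring loss and displacement), showing that re-dialing only the input phases already changes that matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4                                    # number of modes (tiny toy size)

# A fixed, random interferometer: unitary U from a QR decomposition.
Z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
U, _ = np.linalg.qr(Z)

def gbs_matrix(U, r, phi):
    """Effective (symmetric) GBS matrix A = U diag(tanh r_j e^{i phi_j}) U^T
    for pure squeezed inputs with squeezing r_j and phases phi_j.
    The sampling problem is set by A, so re-dialing the input phases
    reprograms the machine even though the interferometer U is fixed."""
    return U @ np.diag(np.tanh(r) * np.exp(1j * phi)) @ U.T

r = np.full(m, 1.0)                      # equal squeezing on every input
A1 = gbs_matrix(U, r, rng.uniform(0, 2 * np.pi, size=m))
A2 = gbs_matrix(U, r, rng.uniform(0, 2 * np.pi, size=m))
print(np.allclose(A1, A2))               # False: new phases, new problem
```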
Comment #17 July 10th, 2021 at 7:51 am
#15 Ted
So yes, my personal view is that it is “a more limited form of programmability”, but just enough to show a quantum computational advantage. There are major misconceptions among the general audience about “programmability”, actually – many people think it means some sort of “C++-style programming”… But what all quantum hardware offers so far is really “controllability” and “reconfigurability”.
Comment #18 July 13th, 2021 at 12:48 am
Hi Scott, could we use quantum computers to prove math theorems, and would that be acceptable to mathematicians? With quantum computers we only know the final answer; we don’t have access to the intermediate steps of what the quantum computer does in the process of arriving at it. We only get the response to the question “Is this theorem true?”: the quantum computer answers “Yes, this theorem is true” or “No, this theorem is false.”
Comment #19 July 13th, 2021 at 11:03 am
Yaqub #18: That’s a very interesting question, which I wrote about in Section 9.2 of Why Philosophers Should Care About Computational Complexity.
You could imagine a quantum computer speeding up the finding of a completely conventional proof—one that, once found, could be checked in the usual way. (E.g., because we simply used Grover’s algorithm to search for some combinatorial gadget, or Shor’s algorithm to factor a huge number.)
But you could also imagine a quantum computation being put forward as, itself, constituting a proof—with the only practical way to verify the proof being to run an equivalent computation on another QC. Now, you could use the recent breakthrough protocols for verifying QCs, like Urmila Mahadev’s, to ameliorate the situation, but even then you’d have to trust some basic laws of physics (and in some cases, cryptographic assumptions) that are presupposed in those verification protocols.
OK, but we’ve sort of been here already! We already have proofs (of the Four-Color Theorem, Kepler’s Conjecture, the Pythagorean Triples Conjecture…) that in practice can’t be verified without doing a massive classical computation. In such cases, what can be done, and has been done, is to reduce the part that needs to be taken on faith to a bare minimum—e.g., have a tiny kernel whose code can be fully read and that verifies the rest of the proof, so that if there were still cheating, then the compiler and/or the hardware themselves would need to be in on the conspiracy. If and when this becomes a live issue, I expect that the analogous things will be done with QC proofs.
Comment #20 July 14th, 2021 at 11:13 am
Scott #19
> In such cases, what can be done, and has been done, is to reduce the part that needs to be taken on faith to a bare minimum—e.g., have a tiny kernel whose code can be fully read and that verifies the rest of the proof,
Or presumably one should be able to formally verify the kernel using a proof assistant, SMT solver or other formal method.
At that point I would be inclined to give such a proof a higher degree of confidence than a complex proof that has only been checked by human mathematicians.
I’m not sure I would have quite the same level of confidence in a quantum proof that couldn’t be classically verified.
Comment #21 July 16th, 2021 at 12:11 am
So all of this is amazing! Does it materially change anything on the non-believer’s side, i.e. would you guess that any of the new technical demonstrations are enough to push well known skeptics out of the skeptic camp?
Comment #22 July 16th, 2021 at 2:00 am
Any comments on this newsbite:
https://www.thehindu.com/sci-tech/technology/iit-bombay-faculty-takes-a-shot-at-a-long-standing-problem-in-computer-science/article35347656.ece
Comment #23 July 17th, 2021 at 5:07 pm
Ian Finn #21: You’d have to ask them!
Comment #24 July 17th, 2021 at 5:08 pm
Rahul #22: I haven’t read it carefully, but it looks like a really nice result!
Comment #25 July 17th, 2021 at 7:19 pm
This seems to be the paper discussed in the article Rahul linked: https://eccc.weizmann.ac.il/report/2021/081/
Comment #26 July 18th, 2021 at 3:59 pm
Thehindu.com’s page has far better content and readability than Quanta Magazine’s reporting, even though the latter has modern formatting and glorified, mostly irrelevant graphics and animations. You don’t have to scroll through multiple pages or skim irrelevant professor quotes and email comments; it goes straight to the heart of the matter. I’m coming to hate machine-learning blogs too, for their verbosity: potentially useful material spread thin across fluff, which only increases their non-readability and uselessness.
Comment #27 July 18th, 2021 at 5:05 pm
Hey Scott,
if you want to quickly catch up on the current state of Wolfram physics exploration/theories, his latest discussion in Sean Carroll’s podcast is really interesting.
He points out that one of the many things they’re still uncertain about is directly tied to the abilities of QCs.
Comment #28 July 19th, 2021 at 2:49 pm
I’m still not understanding how one can make a QC truly programmable, in the sense that it can implement any arbitrary quantum circuit, as a series of discrete gates, without having to carefully reconfigure the physical positions and connections between all qubits based on the specific computation being implemented.
The analogy is probably limited, but when one implements the double-slit experiment, it’s not enough to just say “the system has two states: the electron goes through one slit or the other, and we get a superposition of those two possibilities”. The slits have to be symmetrical along the electron-gun axis, and each path equidistant from the screen (with some precision depending on the wavelength of the electron). If we start to offset one of the slits to one side more and more, it seems very likely that the interference pattern will be compromised, and eventually vanish, because the two paths the electron can take are no longer equivalent, i.e. one path is much longer than the other (and that would make them no longer indistinguishable, a condition for interference).
In a classical computer, the way around this is to divide a complex arbitrary logical circuit/computation into elementary steps, waiting long enough for each intermediate result to stabilize. The waiting time (which defines the clock speed) depends on the longest signal-propagation path in the whole system. Intermediate results need to be latched and stored in transient memory (registers or RAM) until the next stage of the computation can be done. It’s all possible because copying and storing bits is easy and cheap.
But with a QC we can’t ever clone qubits (the only tool is building entanglement amongst more and more qubits), so it’s not clear at all how different arbitrary circuits could be realized using a single physical configuration… wouldn’t that be a major limiting factor for scalability?
Comment #29 July 19th, 2021 at 3:03 pm
I guess the short version of my question above is : is there an equivalent of the Von Neumann architecture for QCs?
I.e. a general way to organize the basic elements of a QC in order to realize any computation?
It seems that the answer could depend very much on the actual physical implementation of the qubits (spin, superconductor circuits, etc).
Comment #30 July 19th, 2021 at 3:06 pm
fred #27: My guess is that Stephen Wolfram and Jonathan Gorard are slightly too much in the mindset of the many worlds interpretation which cares more about the supposedly pure state of the universe as a whole than about the mostly thermal states locally here on earth. But without thermal states, measurement gets hard.
An irregular participant in the comment discussions here recently wrote a lovely short and clear blog post about this trap of the MWI. (I wish he would fix one small mistake and publish it as a short paper.) Probably you won’t believe me that this trap exists, but the SEP article on MWI also hints that it does.
I do think it is great that Stephen Wolfram and Jonathan Gorard work so well together and enjoy it. From my point of view, this is no fundamental problem, neither for MWI nor for Wolfram physics exploration/theories. But after listening to the podcast, I also decided to bring this up in another public discussion.
After that lovely blog post appeared, I did buy Sean Carroll’s book “Something deeply hidden” and read it cover to cover. It is a great book, I was extremely positively surprised. But one reason why I read it was to learn Sean’s position with respect to this DeWitt vs Everett issue and especially this specific trap. Unfortunately, … oh well, I guess he was one of the first to try to explain the MWI deeply in a book for general readers … So in chapter “8 Does This Ontological Commitment Make Me Look Fat? A Socratic Dialog on Quantum Puzzles” he lets Alice’s father raise the objection that the permanent splittings of the universe might create such a huge number of worlds that the Hilbert space would no longer have enough space for all of them. Alice counters this objection using existing estimations that the Hilbert-space of the observable universe has roughly 2^(10^122) dimensions. Then Alice uses that there are around 10^88 particles in the observable universe (mainly photons and neutrinos) and assumes generously that each particle splits one million times per second. The current age of the universe is 10^18 seconds, so the total number of splittings is 10^88 x 10^6 x 10^18 = 10^112. So the total number of branches is 2^(10^112), which is much smaller than 2^(10^122).
And at the very end of the book, Sean explains where this 10^122 came from: from the black-hole entropy bound based on the size of the visible universe. This turns Alice’s argument against her father’s objection into something very controversial and questionable. It is unclear whether the branches that are created by the splitting can really access the whole Hilbert space of the universe. Most portions of it will only be accessible at high energies and/or high momenta.
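For what it’s worth, the arithmetic in Alice’s counting argument checks out mechanically (exact integer arithmetic; one compares exponents, since the branch counts themselves are far too large to write out):

```python
# Alice's back-of-the-envelope count from "Something Deeply Hidden":
particles = 10 ** 88         # particles in the observable universe
splits_per_second = 10 ** 6  # (generous) splittings per particle per second
age_seconds = 10 ** 18       # age of the universe in seconds

total_splittings = particles * splits_per_second * age_seconds
print(total_splittings == 10 ** 112)   # True

# Branches ~ 2^(10^112) vs. Hilbert-space dimension ~ 2^(10^122):
# both are far too large to materialize, so compare the exponents instead.
print(total_splittings < 10 ** 122)    # True, with a factor 10^10 to spare
```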
Comment #31 July 19th, 2021 at 10:43 pm
Fred #28
Peter Shor posted a video earlier this month where he explains not only his own discovery of Shor’s Algorithm but also how he discovered that QCs can (theoretically) implement error correction:
The Story of Shor’s Algorithm, Straight From the Source
Comment #32 July 20th, 2021 at 8:16 am
fred #28 and 29
It seems to me that the questions you raise are more technological/engineering related than fundamental and are probably premature given that, as far as I know, there is as yet no clear winner in the competition to find a suitable technology platform for practical QC’s.
Remember that we already have FPGA’s that can be reconfigured very quickly. Moreover I don’t think the Von Neumann architecture should be taken as a fundamental necessity for computation since classical circuits can emulate TM’s with only an O(N log N) increase in gate count.
Comment #33 July 20th, 2021 at 9:02 am
James #31
Thanks!
I actually watched that video a few days ago and really enjoyed it.
I was surprised that his explanation of error correction in QC was way simpler than I thought it would be: basically redundancy using a bunch of codes, the non-obvious thing being that you can correct for both amplitude and phase errors at the same time.
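Since the redundancy idea is easy to show concretely, here is a minimal numpy sketch (my own toy version, not from the video) of the 3-qubit bit-flip repetition code:

```python
import numpy as np

# 3-qubit bit-flip code: a|0> + b|1>  ->  a|000> + b|111>.
# This toy handles amplitude (X) errors only; Shor's 9-qubit code nests it
# inside a phase-flip code to correct phase errors at the same time.
a, b = 0.6, 0.8
encoded = np.zeros(8, dtype=complex)
encoded[0b000], encoded[0b111] = a, b

def apply_x(state, qubit):
    """Apply a bit-flip (X) to `qubit` (0 = leftmost) of a 3-qubit state."""
    out = np.zeros_like(state)
    for i, amp in enumerate(state):
        out[i ^ (1 << (2 - qubit))] = amp
    return out

corrupted = apply_x(encoded, 1)   # an error flips the middle qubit

def syndrome(state):
    """Parities of qubit pairs (0,1) and (1,2). A real device would measure
    the stabilizers ZZI and IZZ without collapsing the encoded state;
    here we just peek at a basis state with support, as a classical shortcut
    (both support states of a corrupted codeword give the same parities)."""
    i = int(np.flatnonzero(np.abs(state) > 1e-12)[0])
    b0, b1, b2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
    return (b0 ^ b1, b1 ^ b2)

# The syndrome identifies which qubit to flip back.
which = {(1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(corrupted)]
recovered = apply_x(corrupted, which)
print(np.allclose(recovered, encoded))   # True
```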
Comment #34 July 20th, 2021 at 10:02 am
gentzen #30
I’ve been thinking about reading Carroll’s book on MWI. But I did watch a lot of his youtube content on the topic (and also from his podcast), and he never convinced me that he could answer either of the two following things:
* how to treat probabilities in MWI, that’s a rabbit hole of confusion and subjective interpretation of words. You get experts arguing about how to interpret thought experiments such as: if I assume I have a perfect coin, and I toss it a million times, the vast majority of observers will measure a frequentist probability that’s near 50/50 and conclude that the coin is indeed perfect. But there’s always gonna be observers who will conclude that the coin is imperfect (e.g. the one guy who sees a million tails in a row). So what probabilities are we talking about here?! Etc.
* how MWI works when the quantum system isn’t a simple superposition of two very distinct states (particle with spin up or down). In that case we explicitly choose a system where the measurement problem is a bit swept under the rug. Mathematically such a system will entangle with its environment in a binary fashion too, which happens to match one of the two basis states we chose (how convenient!), which only works if the basis states are “pure” states… which happen to be the ones we can observe, which is a nice example of circular logic! But this doesn’t explain why we never observe a system that’s both spin up and down. Or, in the double-slit experiment, MWI doesn’t explain why we observe an electron that’s always at exactly one spot on the screen. Fundamentally MWI doesn’t get rid of the measurement problem (as it claims): how the spreading wave function of an electron that’s traveling in free space ends up with branches (observations) where the electron is at a unique discrete position on the screen. We still have the wave/particle duality puzzle (the transition from wave to particle is the measurement problem!). As Sabine Hossenfelder put it in one of her videos, the Copenhagen interpretation has one wave-function collapse to deal with, but the MWI has as many wave-function collapses as there are branches…
Comment #35 July 20th, 2021 at 11:33 am
Gentzen #30
“and assumes generously that each particle splits one million times per second”
When the measurement is on continuous time or space, shouldn’t we use (at the minimum) the Planck units?
When confronted with this, MWI seems to say “the number of branches all depend on the dt or dx you choose”. I get that a human measurement apparatus always has a discrete set of values way less granular than the Planck units (basically crystal atoms as pixels), but that seems very arbitrary/anthropocentric.
Like, when a particle decays randomly (e.g. Schrödinger’s cat), what’s the most granular time resolution we should consider? The Planck time at 10^-44 sec, or the best time measurements ever done (so far) by humans?
Similarly, when two free particles interact, what’s the most granular space resolution? Planck scale at 10^-35 m or the best position measurement ever done (so far) by humans?
All we know is that Δp · Δx > 10^-34 J·s.
Comment #36 July 20th, 2021 at 4:21 pm
fred #29
In the Carroll/Preskill podcast you recently linked, Preskill explains how single and two-qubit gates are implemented – around minute 45-50.
E.g. in the trapped ion architecture, atoms are used as qubits which are controlled using lasers.
The qubit has a 0 or 1 value depending on whether it is in a ground or excited state (and possibly in a superposition of both).
Then two-qubit gates are implemented by leveraging Coulomb interactions between adjacent qubits.
For universal quantum computation, it’s sufficient to have a single two-qubit gate (like the CNOT gate) in addition to single qubit gates (like the H, S and T gates).
Some QC architectures (like the IonQ) have full qubit connectivity, so any two qubits can interact via a two-qubit gate.
In other architectures (e.g. using superconducting qubits), the qubits may be arranged in a grid where only adjacent qubits can interact. In that case, quantum SWAP gates are used to “move” qubits into position (this increases circuit depth by some factor, which usually translates into lower overall fidelity).
Relative to a classical machine, it’s still a universal machine. But instead of reading data from a disk into registers to be manipulated by a CPU, you more or less bombard the “quantum disk” with lasers controlled by the QC.
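A minimal numpy illustration of the gate set Job describes (a sketch of my own, not from the podcast): H plus CNOT already makes a Bell pair, and the SWAP used to shuttle qubits around a grid decomposes into three CNOTs, which is one reason SWAP-routing costs circuit depth:

```python
import numpy as np

# The universal gate set mentioned above, as explicit matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
I2 = np.eye(2)

# H on the first qubit, then CNOT, turns |00> into a Bell pair:
state = CNOT @ np.kron(H, I2) @ np.array([1, 0, 0, 0], dtype=complex)
# state is now (|00> + |11>) / sqrt(2)

# The SWAP used to "move" qubits across a grid is itself three CNOTs
# (the middle one with control and target exchanged, via H conjugation):
CNOT_rev = np.kron(H, H) @ CNOT @ np.kron(H, H)
SWAP = CNOT @ CNOT_rev @ CNOT
print(np.allclose(SWAP, np.array([[1, 0, 0, 0],
                                  [0, 0, 1, 0],
                                  [0, 1, 0, 0],
                                  [0, 0, 0, 1]])))   # True
```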
Comment #37 July 20th, 2021 at 5:07 pm
fred: I don’t see why directing the qubits to the right gates would be qualitatively more difficult than with classical computers. Yes, we can’t copy qubits, but we can still move (swap) them, and even conditionally move them to different places depending on a control input. You’ll only be able to use each qubit output as input once, but that’s not a problem, because the quantum computations that you want to implement will also use each output qubit only once as an input. There are, of course, lots of quantitative problems: technical challenges until you get a computer that can do large enough computations to do something useful, but the same was true in the early ages of electronic computers for classical computations.
Comment #38 July 21st, 2021 at 3:46 am
fred #34: “he never convinced me that he could answer any of the two following things”
I will use Lev Vaidman’s SEP article on MWI and explain how Sean’s answer deviates from it (in cases where it does). “how to treat probabilities in MWI”: in section 6, Objections to the MWI, Lev admits this (Sean doesn’t explicitly do so). Sean’s treatment of probabilities corresponds to Lev’s section 4.2, Illusion of Probability from Post-Measurement Uncertainty, and Lev’s section 4.3, Probability Postulate from Decision Theory.
“how MWI works when the quantum system isn’t a simple superposition of two very distinct states (particle with spin up or down)”: On the one hand, Sean evokes pointer states: “The dial of the apparatus includes a pointer that is pointing either to Up or to Down. An apparatus like that doesn’t stay separate from the rest of the world.” On the other hand, he often just falls into the trap and makes an ontological overcommitment (with too many worlds and unnecessary splitting). Lev, on the other hand, doesn’t even try to unnecessarily avoid Copenhagen-like pictures: “A world is the totality of macroscopic objects: stars, cities, people, grains of sand, etc. in a definite classically described state.” Some say that we will never know whether this was really what Everett wanted, or whether it was just the result of Wheeler’s interference in vain attempts to make Bohr happy. But it led to a defensible approach that avoided unnecessary overcommitments (which are convenient and often convincing, but misguided in the end). Lev’s section 3.2, The Quantum State of a World, says so explicitly.
And in subsection 3.4, FAPP, Lev openly admits as much.
That subsection already existed in the first version from 2002. The current version from 2018 also references “Wallace, D., 2010a, ‘Decoherence and Ontology (Or: How I Learned to Stop Worrying and Love FAPP)’”. In section “6 How many worlds?” that paper gives a sweet explanation of actual branching events (and why they are actually quite rare).
I hope that also partly answers your questions from #35: a million times per second per particle is way too much. Most single particles cannot split the world at all, because the result of their interactions would need to be magnified to the macroscopic level for a splitting to happen. The million per second, on the other hand, is OK; that is only a megahertz, and current computers operate at gigahertz.
One suggestion for #28: in the sidebar under “Quantum Computing Primers”, Scott links to “David Mermin’s essay” (from 2002). A slightly later (2003) and slightly shorter and sweeter paper is Copenhagen Computation: How I Learned to Stop Worrying and Love Bohr.
Comment #39 July 21st, 2021 at 8:54 am
@Job, @b_jonas
thanks for the answers! Good stuff!
Comment #40 July 21st, 2021 at 10:27 am
gentzen #38
Thank you for so many good references!
I see, it’s true that the “splitting” is in the eye of the beholder. A bit like the concept of entropy: how many “distinct” macrostates exist depends on one’s inability to measure distinct microstates.
Or it’s like asking “at what point does the double-slit experiment go from quantum to classical” (when adding a way to sometimes figure out which path was taken): it’s a continuous transition (for many particles), not a sudden switch. And (as far as I understand) decoherence doesn’t mean that there is no longer any quantum interference; it’s that all the different phases involved become so incoherent with one another that the interference no longer has any meaningful macroscopic pattern in it, it’s like noise. And at that point the “branches” are macroscopically distinct (to a macroscopic “observer”).
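As a toy illustration of that dephasing picture (my own sketch, not from any of the papers discussed here): if the |1⟩ branch of a superposition picks up a random phase from the environment, then averaging over many environment configurations suppresses the off-diagonal (interference) term of the qubit’s density matrix while the populations survive:

```python
import numpy as np

rng = np.random.default_rng(0)

# Qubit in (|0> + |1>)/sqrt(2); the environment tags the |1> branch
# with a random phase phi.  The off-diagonal density-matrix element
# (the interference term) is the average of exp(i*phi)/2, which
# shrinks toward zero as more environment configurations are averaged.
for n_env in (1, 10, 100, 10_000):
    phases = rng.uniform(0.0, 2.0 * np.pi, n_env)
    coherence = abs(np.mean(np.exp(1j * phases))) / 2
    print(f"{n_env:6d} environments -> |rho_01| = {coherence:.4f}")
```

The diagonal entries stay at 1/2 throughout: the “branches” are still there, but their relative phase has become noise, which is exactly the sense in which interference stops producing macroscopic patterns.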
When it comes to probabilities, I don’t know, I always think of this: when you shine light on a piece of glass, the fraction of light that gets reflected is a direct measurement of the quantum probability for each photon to be reflected rather than transmitted. Useful questions about quantum probabilities are always of that nature. The rest seems to be metaphysical musings. It seems to me that no progress can be made between all the Q interpretations (which I think was Scott’s point in his post), so it’s kinda pointless from a practical point of view.
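For the glass example, the connection is just the Fresnel reflectance at normal incidence (a standard textbook computation, added here for concreteness, not something from the thread):

```python
# Fresnel reflectance at normal incidence.  Classically this is the
# reflected fraction of the light's intensity; quantum mechanically it
# is the probability that any single photon is reflected.
n_air, n_glass = 1.0, 1.5
R = ((n_air - n_glass) / (n_air + n_glass)) ** 2
T = 1.0 - R  # probability that the photon goes through
print(f"reflected: {R:.2%}, transmitted: {T:.2%}")
```

So the classically measured ~4% reflection from a glass surface is the per-photon probability made directly visible.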
But I think that the Wolfram approach has some really interesting new takes on many of those questions. I like how branchial space is always based on a spacetime “slice” of the universe, i.e. depending on the observer. And his concept that constructive and destructive interference is when branches converge or diverge for a given observer, and that consciousness is a process that tries to build a coherent story from that branchial space, e.g. a particle at two separated points goes unseen by consciousness because it doesn’t present a coherent picture.
PS:
the latest PBS Space Time video on the topic of MWI was pretty interesting
Comment #41 July 21st, 2021 at 10:42 am
gentzen #38 (sorry, one more thing)
“Some say that we will never know whether this was really what Everett wanted, or whether it was just the result of Wheeler’s interferences in vain attempts to make Bohr happy.”
I once went down the rabbit hole of finding exactly what Everett wrote about some of the MWI objections (at the time he was active), and it seems that his counterpoints (all of them being way over my head) were actually way more subtle and complex than the explanations of the subsequent proponents of MWI.
I’ve heard Sean Carroll say many times that, during Everett’s time, decoherence wasn’t that well understood, and that it’s a key ingredient in understanding MWI, but I’m not so sure that those claims are true.
Comment #42 July 21st, 2021 at 4:05 pm
fred #27,
Were you referring to Wolfram’s comment about QCs potentially not having a speedup relative to classical computers because of how quantum measurements are evaluated according to his framework? The idea being that measurement is more elaborate than we think?
I didn’t really grasp much of the discussion so I didn’t watch the whole thing. Too many new concepts for me, i’ll have to rewatch it.
But the idea that QCs can’t have a speedup because of the hidden mechanisms behind measurement is the kind of general statement that might put QCs beneath classical machines in terms of capabilities.
For example, there are several quantum gate sets which have interference capabilities but can be implemented by a classical machine (in n-dimensional space). Is measurement within a QC using those gate sets still non-viable? If not, what distinguishes one from the other in Wolfram’s model?
I think sometimes we forget that interference is not an exclusively quantum process (only interference within the context of universal computation).
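A minimal example of that last point (an illustrative sketch of my own, not tied to any specific gate set from the papers): the amplitude cancellation in a Hadamard–Hadamard circuit can be reproduced exactly on a classical machine with an ordinary 2-vector of reals.

```python
import numpy as np

# Hadamard gate as a plain real matrix.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

state = np.array([1.0, 0.0])   # |0>
superposed = H @ state         # equal amplitudes on |0> and |1>
recombined = H @ superposed    # the two paths to |1> cancel

print(superposed)  # amplitudes ~[0.707, 0.707]
print(recombined)  # back to [1, 0]: destructive interference
```

Of course this brute-force classical simulation needs 2^n-dimensional vectors for n qubits; the interesting question is exactly when interference can be had without that blowup.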
Comment #43 July 22nd, 2021 at 5:05 am
fred #40: “And (as far as I understand) decoherence is not that there are no longer any quantum interference, it’s that all the different phases involved become so incoherent with one another that the interferences no longer have any meaningful macroscopic patterns in them, it’s like noise.”
The way I see it is that the MWI mindset often invokes the picture of the larger Hilbert space. And there are scenarios where this is the most appropriate picture. Fine-tuned resonances that interact in complicated ways probably constitute such a scenario, and there should be many more. But invoking the picture of the larger Hilbert space in scenarios which are explicitly characterized by the absence of resonances is overkill, and risks being misleading.
The important message for me from that lovely blog post was that often all you need is that interference gets suppressed, no need to invoke disjointness, especially if what actually happens is just dephasing. There are scenarios where this insight can make the difference between whether we can simulate them properly with reasonable effort or not:
The picture of the larger Hilbert space is not wrong. Additionally, disjointness is often easy to grasp and allows one to analyse and understand basic facts about a situation. It may sometimes be a lie we tell our children, but this is not necessarily a bad thing:
Comment #44 July 23rd, 2021 at 12:39 am
In the IBM paper, is my understanding correct that they compute SLSBn (the 2nd least-significant bit of the Hamming weight of an n-bit input register) by effectively rotating the one writable qubit?
That’s how I read their circuit.
If so, suppose that you wanted to compute the nth least-significant bit instead of just the 2nd.
In that case would you need increasingly smaller rotations?
I’m thinking that a classical machine could also do the same using a single rotating register, if you grant it a sufficiently small rotation step – which seems fair to me.
Why would the QC be allowed a rotation gate with the desired precision, while the classical machine is stuck with just NAND gates?
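To make that intuition concrete, here is a purely classical sketch (my own construction, not the circuit from the IBM paper; the function name is hypothetical): accumulate a rotation of π/2^k per set input bit in a single angle register, and read bit k of the Hamming weight (k = 0 for the least-significant bit) from which half-circle the final angle lands in.

```python
import math

def kth_bit_of_hamming_weight(bits, k):
    """Classical single-register analogue: one rotation of pi/2**k per
    set input bit; bit k of the Hamming weight is read from the angle."""
    theta = math.pi / 2**k
    angle = 0.0
    for b in bits:
        if b:
            angle = (angle + theta) % (2 * math.pi)
    # The true angle is an exact multiple of theta, so half a step of
    # tolerance absorbs any accumulated floating-point error.
    return 1 if math.pi - theta / 2 < angle < 2 * math.pi - theta / 2 else 0
```

Reading off bit k this way does need a step size of π/2^k, i.e. exponentially fine rotations for the higher bits, which seems to support the objection: granting either machine arbitrarily precise rotations is what does the work.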
Comment #45 August 4th, 2021 at 7:12 pm
Another naive question: in Scott’s blog post from December 3rd, he said that “the difficulty of simulating a ‘generic’ BosonSampling experiment increases roughly like 2^n, where n is the number of detected photons”, which would be roughly 10^34 for 113 detected photons. But the preprint seems to be focusing on a Hilbert space dimension 10^43 (which I assume is 2^m, where m=144 is the number of modes) as the relevant metric of computational difficulty.
Which is the more relevant metric for the difficulty of classically simulating this system – the number of detected photons, or the number of modes? (I can’t figure out how many photons entered the interferometer, which was presumably more than 113 if some photons were lost?)
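For what it’s worth, the arithmetic behind the two numbers (just a log conversion, with no claim about which metric is the right one):

```python
import math

n_photons, n_modes = 113, 144

# Convert powers of 2 to powers of 10: log10(2^n) = n * log10(2).
print(f"2^{n_photons} ~ 10^{n_photons * math.log10(2):.1f}")  # ~10^34.0
print(f"2^{n_modes} ~ 10^{n_modes * math.log10(2):.1f}")      # ~10^43.3
```

So the ~10^34 figure corresponds to 2^n with n = 113 detected photons, while the ~10^43 Hilbert-space dimension corresponds to 2^m with m = 144 modes.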
Comment #46 August 5th, 2021 at 6:53 pm
Scott, I’d be interested to hear your take on this question of whether the number of detected photons or the number of modes is the dominant contribution to the computational complexity of classical simulation. I doubt that anyone else is watching this comment thread at this point (but of course please don’t feel any obligation to respond).
Comment #47 August 5th, 2021 at 7:22 pm
Ted #46: Sorry for the delay. Number of detected photons is by far the more important contribution to computational complexity. The main relevance of the number of modes is just that you want it to be large enough to rule out certain classical simulation algorithms, and enable known hardness arguments to apply.
Comment #48 August 6th, 2021 at 9:03 am
Hi Scott and Shtetl-Optimized readers.
In a collaboration between Bristol/Imperial/HPE, we have been developing new algorithms for exact simulation of Gaussian Boson Sampling. We found that there are huge speedups to be found over existing methods when there are photon collisions, particularly when threshold detectors are used.
The GBS experiments of USTC still run much faster than our algorithms, but we show a large speedup over any predictions previously reported. For example, we predict a ~1-billion-fold speedup compared to estimates in the Science paper from last year.
You can check out our results here: https://arxiv.org/abs/2108.01622
Comment #49 August 8th, 2021 at 5:48 pm
fred #40: Sean Carroll now did a mindscape episode with David Wallace. They discussed some quite deep conceptual physical topics, and I made heavy use of the transcript in order to understand them. And of course they also talked about many-worlds.
David Wallace seems to be good at talking about those topics in words normal people can understand, while still avoiding saying anything wrong or misleading. For example, Sean asked:
And David elaborated: