Quanta of Solace

In Quanta magazine, Kevin Hartnett has a recent article entitled A New Law to Describe Quantum Computing’s Rise? The article discusses “Neven’s Law”—a conjecture, by Hartmut Neven (head of Google’s quantum computing effort), that the number of integrated qubits is now increasing exponentially with time, so that the difficulty of simulating a state-of-the-art QC on a fixed classical computer is increasing doubly exponentially with time. (Jonathan Dowling tells me that he expressed the same thought years ago.)
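To make the scaling concrete, here's a minimal back-of-the-envelope sketch (the starting qubit count and doubling schedule are made up for illustration, not anyone's actual roadmap): if the number of qubits doubles with each hardware generation, and brute-force classical simulation tracks one amplitude per basis state, then the simulation cost grows doubly exponentially in the generation number.

```python
# Toy illustration of the "Neven's Law" arithmetic.  The numbers below are
# illustrative assumptions only, not real hardware data.

def qubit_count(generation, q0=4):
    """Hypothetical qubit count that doubles with each hardware generation."""
    return q0 * 2 ** generation

def brute_force_sim_cost(generation):
    """Rough state-vector simulation cost: ~2^(#qubits) amplitudes to track."""
    return 2 ** qubit_count(generation)

for g in range(6):
    print(g, qubit_count(g), brute_force_sim_cost(g))
    # qubits grow like 2^g (singly exponential), so cost grows like 2^(2^g)
```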

Near the end, the Quanta piece quotes some UT Austin professor whose surname starts with a bunch of A’s as follows:

“I think the undeniable reality of this progress puts the ball firmly in the court of those who believe scalable quantum computing can’t work. They’re the ones who need to articulate where and why the progress will stop.”

The quote is perfectly accurate, but in context, it might give the impression that I’m endorsing Neven’s Law. In reality, I’m reluctant to fit a polynomial or an exponential or any other curve through a set of numbers that so far hasn’t exceeded about 50. I say only that, regardless of what anyone believes is the ultimate rate of progress in QC, what’s already happened today puts the ball firmly in the skeptics’ court.

Also in Quanta, Anil Ananthaswamy has a new article out on How to Turn a Quantum Computer Into the Ultimate Randomness Generator. This piece covers two schemes for using a quantum computer to generate “certified random bits”—that is, bits you can prove are random to a faraway skeptic: one due to me, the other due to Brakerski et al. The article cites my paper with Lijie Chen, which shows that, under suitable computational assumptions, the outputs in my protocol are hard to spoof using a classical computer. The randomness aspect will be addressed in a paper that I’m currently writing; for now, see these slides.

As long as I’m linking to interesting recent Quanta articles, Erica Klarreich has A 53-Year-Old Network Coloring Conjecture is Disproved. Briefly, Hedetniemi’s Conjecture stated that, given any two finite, undirected graphs G and H, the chromatic number of the tensor product G⊗H is just the minimum of the chromatic numbers of G and H themselves. This reasonable-sounding conjecture has now been falsified by Yaroslav Shitov. For more, see also this post by Gil Kalai—who appears here not in his capacity as a quantum computing skeptic.
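For readers who want to play with the objects in the conjecture, here's a tiny brute-force sketch (purely illustrative, with my own made-up helper functions; Shitov's counterexample is astronomically larger than anything this code could check, so on examples this small the conjectured equality does hold):

```python
# Brute-force sketch of the tensor product and chromatic number.
# Exponential time -- only for tiny graphs.
from itertools import product

def tensor_product(G, H):
    """Tensor (categorical) product: (g,h) ~ (g2,h2) iff g~g2 in G and h~h2 in H."""
    (VG, EG), (VH, EH) = G, H
    V = [(g, h) for g in VG for h in VH]
    E = {((g, h), (g2, h2)) for (g, g2) in EG for (h, h2) in EH}
    return V, E

def chromatic_number(graph):
    """Smallest k admitting a proper k-coloring (exhaustive search)."""
    V, E = graph
    for k in range(1, len(V) + 1):
        for assignment in product(range(k), repeat=len(V)):
            color = dict(zip(V, assignment))
            if all(color[u] != color[v] for (u, v) in E):
                return k

# Example: K3 (a triangle), with edges listed in both directions.
K3 = ([0, 1, 2], {(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)})
T = tensor_product(K3, K3)
print(chromatic_number(K3), chromatic_number(T))   # 3 3, i.e. chi(G x H) = min(3, 3)
```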

In interesting math news beyond Quanta magazine, the Berkeley alumni magazine has a piece about the crucial, neglected topic of mathematicians’ love for Hagoromo-brand chalk (hat tip: Peter Woit). I can personally vouch for this. When I moved to UT Austin three years ago, most offices in CS had whiteboards, but I deliberately chose one with a blackboard. I figured that chalk has its problems—it breaks, the dust gets all over—but I could live with them, much more than I could live with the Fundamental Whiteboard Difficulty, of all the available markers always being dry whenever you want to explain anything. With the Hagoromo brand, though, you pretty much get all the benefits of chalk with none of the downsides, so it just strictly dominates whiteboards.

Jan Kulveit asked me to advertise the European Summer Program on Rationality (ESPR), which will take place this August 13-23, and which is aimed at students ages 16-19. I’ve lectured both at ESPR and at a similar summer program that ESPR was modeled after (called SPARC)—and while I was never there as a student, it looked to me like a phenomenal experience. So if you’re a 16-to-19-year-old who reads this blog, please consider applying!

I’m now at the end of my annual family trip to Tel Aviv, returning to the Eastern US tonight, and then on to STOC’2019 at the ACM Federated Computing Research Conference in Phoenix (which I can blog about if anyone wants me to). It was a good trip, although marred by my two-year-old son Daniel falling onto sun-heated metal and suffering a second-degree burn on his leg, and then by the doctor botching the treatment. Fortunately Daniel’s now healing nicely. For future reference, whenever bandaging a burn wound, be sure to apply lots of Vaseline to prevent the bandage from drying out, and also to change the bandage daily. Accept no fancy-sounding substitute.

36 Responses to “Quanta of Solace”

  1. Joshua Zelinsky Says:

    “In reality, I’m reluctant to fit a polynomial or an exponential or any other curve through a set of numbers that so far hasn’t exceeded about 50.”

    Relevant xkcd https://xkcd.com/2048/ .

    More serious comment. If Neven’s Law or something similar to it is accurate, does this remove some of the reason to be interested in boson sampling? If we’re a long time away from quantum supremacy, then boson sampling seems very interesting. But if near-universal quantum computers with a substantial number of qubits are around the corner, then boson sampling seems like a less compelling activity to investigate. Is there some argument for why experimentalists should still work on boson sampling in this context? (I’m asking about experimentalists here; there seems to be more than enough justification for continued interest in the theoretical end.)

  2. Scott Says:

    Joshua #1: Yes, I completely agree that, as a vehicle for demonstrating quantum supremacy, BosonSampling has mostly been superseded (at least for now…) by the progress in superconducting qubits. Note, though, that a lot of the theoretical work that we and others did on BosonSampling has since been ported over (with suitable modifications) to random circuit sampling experiments like the ones that Google is planning.

  3. Job Says:

    Vazirani’s algorithm for certifiable randomness reminds me of Simon’s algorithm.

    Is the verification step analogous to the classical computer asking for a value that’s orthogonal to the secret value s, while the QC is in a superposition of xz and yz, where y = x ⨁ s and z is the random value?

    The whole idea of using a QC to certify randomness is really interesting.

    With this algorithm though it sounds like it’s only checking that the random generator is capable of quantum randomness, rather than asserting that a random value is actually random.

    For example, what’s stopping the service from using a classical computer to pick a random z, and use a QC only for answering the challenge?

    Unless the challenge is unique to z? That would make it difficult to reconstruct the desired state in the QC in order to answer the challenge. I think I understand how it could work.

  4. Laci Says:

    You mention Erica Klarreich and Gil Kalai in the context of the recent disproof of Hedetniemi’s conjecture. Perhaps you could also mention the person who obtained the result.

  5. Scott Says:

    Laci #4: Sorry—fixed!!! Slighting Shitov for this great accomplishment couldn’t have been further from my intentions. I figured that a new post—a nice, “uncontroversial” math and CS one—put up while running to the airport and desperately trying to deal with two wild kids was better than no new posts at all, but this sort of goof shows why the decision is not an obvious one.

  6. Mark S Says:

    (Long-time lurker, rank amateur, first post. Apologies for any clear mistakes).

    Scott,

    For sampling from a random quantum circuit, is it possible to leverage the output so as to amplify the soundness of some BPP or AM protocol, without increasing the complexity of the underlying test, or having to run the protocol twice?

    For example, I can classically uniformly sample $n$ bits from the $2^n$ possible states, and use these $n$ bits, treated as a number, as a test to see if some polynomial is $0$. My soundness will be some number, say $\sigma$. I can even use the NIST process to generate my $n$ bits, and they will be chosen uniformly at random from $2^n$ states.

    Or I can sample $n$ bits from $n$ qubits output from some random quantum circuit. Here whatever is output from the random quantum circuit will *not* be chosen *uniformly* at random but will obey (I think) the Porter-Thomas distribution. I can plug in only these $n$ bits and test to see if the polynomial is $0$. Is the soundness achieved from a random quantum circuit amplified over sampling uniformly at random?

  7. Craig Says:

    Scott said: “They’re the ones who need to articulate where and why the progress will stop.” Scott essentially says:

    At time 1, difficulty of simulating a state-of-the-art QC on a fixed classical computer is 2^(2^1)+1.

    At time 2, difficulty of simulating a state-of-the-art QC on a fixed classical computer is 2^(2^2)+1.

    At time 3, difficulty of simulating a state-of-the-art QC on a fixed classical computer is 2^(2^3)+1.

    At time 4, difficulty of simulating a state-of-the-art QC on a fixed classical computer is 2^(2^4)+1.

    Therefore, at time 5, it is only logical that the difficulty of simulating a state-of-the-art QC on a fixed classical computer must be 2^(2^5)+1.

    And here is my answer:

    2^(2^1)+1 is prime.

    2^(2^2)+1 is prime.

    2^(2^3)+1 is prime.

    2^(2^4)+1 is prime.

    Therefore, 2^(2^5)+1 must also be prime. (This is known not to be true. Look up “Fermat number”.)
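    For anyone who wants to check the arithmetic, here is a minimal Python sketch (trial division is plenty at this size; the helper function is just for illustration):

    ```python
    # Check the Fermat numbers F_n = 2^(2^n) + 1 discussed above.
    def is_prime(m):
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True

    for n in range(1, 6):
        F = 2 ** (2 ** n) + 1
        print(n, F, is_prime(F))   # prime for n = 1..4; composite for n = 5 (641 divides it)
    ```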

    In the same way Fermat’s hypothesis that all numbers of the form 2^(2^n)+1 are prime hit a brick wall, Neven’s law will hit a brick wall.

    And the reason for this is simple: The universe is a digital computer which is incapable of simulating a quantum computer on large scales.

  8. Scott Says:

    Craig #7: Your analogy contains the seeds of its own breakdown.

    Since the primes have zero asymptotic density, and are also “pseudorandom” in many respects, the prior theoretical expectation in that case is that no simple formula, like 2^(2^n)+1, could possibly pick out only primes. So when Fermat’s formula does so for the first 4 values, one is momentarily surprised, but then things revert to expectation by n=5.
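    To spell out that prior expectation with the standard back-of-the-envelope heuristic (treating the primality of a huge number N as a “random event” of density 1/ln N, which of course it isn’t literally):

    $$\Pr\left[\,2^{2^n}+1 \text{ is prime}\,\right] \;\approx\; \frac{1}{\ln 2^{2^n}} \;=\; \frac{1}{2^n \ln 2}, \qquad \sum_{n\ge 0}\frac{1}{2^n \ln 2} \;<\; \infty,$$

    so one heuristically expects only finitely many Fermat primes, and there’s no shock when the pattern stops.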

    With microscopic objects, by contrast, quantum mechanics is the reigning theory of their behavior, and it (with some input from computational complexity theory) predicts that the difficulty of simulating n interacting objects should increase like exp(n). This is an asymptotic prediction, which is different from a prediction about a special property like primality happening to hold at every integer, but in any case the experimental results so far are consistent with the prediction. The only things pointing the opposite direction are the a-priori armchair hypotheses of you and Gil and a few others. A puzzle arises when theory and the data point in opposite directions, not when they point in the same direction. This is why the ball is in the skeptics’ court.

  9. Craig Says:

    Scott,

    Fermat did not have the luxury of knowing that the primes have zero asymptotic density, which is perhaps why he made the mistake.

    Similarly, we simply don’t know what will happen with large-scale quantum computers. You can extrapolate one way, that Schrödinger’s equation always holds; I can extrapolate the other way, that Schrödinger’s equation is just an approximation. We just don’t know.

    Extrapolation is not science, even though in this case with QC it is driving science.

  10. Scott Says:

    Craig #9: It’s a lot easier to see that the primes have zero asymptotic density than to see that the number of primes up to n is asymptotic to n/ln(n), although I don’t know just how early the question was formulated, nor how early it was resolved. Anyone want to help?

    There’s been a grand total of zero observed violations of quantum mechanics since its discovery in the mid-1920s. You believe it’s only an approximation, but you can’t say what it’s an approximation to, nor where and why the approximation breaks down, nor can you even articulate why the hypothetical better theory would take us back to classical polynomial time. Your belief is not on an equal footing with the successful theory that we have. Sure, you might turn out to be right—but even if so, I can still say with 100% confidence that the ball is in your side’s court.

    Since I don’t have time to continue this exchange, it will be better if I precommit now to not doing so.

  11. Douglas Knight Says:

    Here is something that goes through a lot more than 50 points.

  12. Craig Says:

    Fermat was born 30+ years before Newton. I am guessing that the notion of asymptotic density never even crossed his mind. Limits of functions weren’t even invented/discovered then.

    Also, he probably had a chart of prime numbers, but I’m guessing the chart wasn’t that big. Maybe the first thousand primes at most.

  13. James Warren Says:

    Does anyone know whether, given a circuit of Toffoli and Hadamard gates and a given input, it is hard or easy to find a path (with the choice of which output vector each Hadamard transform takes a given input vector to serving as the branch points) to an output with positive or negative amplitude? In other words, is it easy or hard to sample from the positive and negative amplitude distributions of the output of a quantum circuit before they cancel each other?

  14. Scott Says:

    James #13: The output distribution of a quantum circuit can’t really be thought of in terms of two distributions that cancel, one positive and one negative. For one thing, we’re dealing with complex numbers—but even if we restricted to reals, we’re dealing with amplitudes, and you don’t get probabilities at all until you square them.

    But to be less pedantic: almost every question that’s similar to what you asked is believed to be exponentially hard for classical computers (and in most cases, for quantum computers as well). In many cases, the problems are actually known to be #P-hard.

  15. Joshua Zelinsky Says:

    ” It’s a lot easier to see that the primes have zero asymptotic density than to see that they’re exactly asymptotic to n/ln(n), although I don’t know just how early the question was formulated, nor how early it was resolved. Anyone want to help?”

    Legendre conjectured a statement which implies the prime number theorem in 1798. Shortly before that (I don’t have the exact year), Legendre proved that the primes had density zero. That’s about 150 years after Fermat. That said, if one looks at the sort of things that Fermat, Descartes and others were proving and conjecturing in that time period, it certainly looks like they had an intuition that the primes became more sparse as one got larger. Between those time periods one has Euler doing a lot of work that looks from a modern standpoint like he was trying to get a handle on the density of the primes (see for example his second proof that there are infinitely many primes).

    One of the difficulties is that prior to the modern era, actually conjecturing things explicitly wasn’t that common. Fermat and Descartes are two of the first to do so, so for others one often needs to figure out what they were conjecturing by what sort of things they were actually proving. One fun example related to this is that there’s some disagreement on when the question of whether there are any odd perfect numbers dates from. Descartes mentions it explicitly, but if one looks at earlier writers, they were proving things that certainly look like they were trying to prove special cases, such as proving that no perfect number is a power of a prime.

  16. James Warren Says:

    Scott #14: My understanding is that combinations of Toffoli and Hadamard gates have all the computational power of quantum computing. Complex amplitudes can be represented by an extra qubit. The problem is clearly in NP. Just plug the path into the quantum circuit and check that the output has the desired coefficient. The ‘coefficient pathfinding’ problem has aspects that make it look like it might be easy (pathfinding is usually easy, most problems have exponentially many correct answers, quantum circuits are reversible) and others that make it look hard (the search space is exponential).

  17. Scott Says:

    James #16: It’s true that Toffoli and Hadamard are a universal set of quantum gates, and it’s also true that you can simulate complex amplitudes using one extra qubit. But the problems that we’re talking about are neither known nor believed to be in NP. It doesn’t suffice just to “check a given path,” because the final amplitudes that we care about are generically sums over exponentially many paths, most of which cancel each other out—the final amplitude then being the tiny residue that’s left over. This is why we generally get #P-hard problems when we ask about specific amplitudes (even just multiplicatively approximating them).
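    If it helps to see the path-sum picture in the smallest possible case, here's a toy single-qubit sketch (my own illustration, with made-up function names, not anything from the papers under discussion): the amplitude is a sum over all intermediate basis states, and for the “wrong” output the terms cancel exactly.

    ```python
    # Toy Feynman path sum for one qubit: the amplitude <out| U_k ... U_1 |in>
    # is a sum over all intermediate basis states.  With two Hadamards the
    # individual paths are +/- 1/2; they reinforce for output 0 and cancel for 1.
    from itertools import product

    H = [[2 ** -0.5,  2 ** -0.5],
         [2 ** -0.5, -2 ** -0.5]]   # Hadamard gate as a 2x2 matrix

    def amplitude(gates, inp, out):
        total = 0.0
        for path in product([0, 1], repeat=len(gates) - 1):
            states = [inp, *path, out]
            term = 1.0
            for gate, s, t in zip(gates, states, states[1:]):
                term *= gate[t][s]      # matrix element <t| gate |s>
            total += term
        return total

    print(amplitude([H, H], 0, 0))   # ~1.0: the paths reinforce
    print(amplitude([H, H], 0, 1))   # ~0.0: the paths cancel
    ```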

  18. Triple-exponential law with classical simulation of quantum computer – Quantum Bot Says:

    […] formal manipulations with double-exponential Neven’s Law were already widely discussed, yet it is not most weird way […]

  19. James Warren Says:

    Scott #17:

    Ok, for some context why finding just one path is important (rather than summing exponentially many):

    Let’s try to prove the Sipser-Lautemann theorem for BPP and its analog for BQP using quantum circuits. You can use T&H gates for BPP just by summing amplitudes rather than subtracting them. Starting with a BPP probability distribution in NP^#P, (amplify the amplitudes) and add a number of ancillary bits, then use controlled logic gates conditioned on the original random bits to scramble the ancillary bits, flattening the overall probability distribution to the point that each output string has at most one satisfying assignment. So, a #SAT instance is padded with extra clauses to the point that it becomes a SAT instance and BPP \in NP^NP.

    When we try to extend the technique to BQP, we have to deal with the Born rule and the cancellation of amplitudes, problems which manifest themselves as round-off errors. If one state has a probability amplitude 100 and another state has a probability amplitude of 1, it should be 10,000 times more probable. But, if the distributions are flattened, the first state will only be 100 times more probable. This is because the second state should have an amplitude of 0.01, but the amplitudes are rounded to integers. Born rule problems can be dealt with by amplifying amplitudes. The other round-off error problem is when amplitudes that should cancel fail to do so. This can be counteracted by flattening the amplitude distribution less (to a max height and depth of k) and by defining new sub-distributions by fixing the outcomes of Hadamards. However, I am not sure how many such distributions are needed to make the error sufficiently small, but as long as it is not superpolynomial it shouldn’t be a problem.

    If you start with a BQP probability amplitude distribution in NP^GapP, amplify the amplitudes extra to deal with Born rule issues, and use ancillary qubits and controlled logic gates to flatten it, you get a non-zero amplitude distribution like (A \land \lnot B) \lor (\lnot A \land \lnot B) (A is positive coefficients and B is negative coefficients), which is in SAT. This doesn’t work properly because of pixellation issues and instead you can define 2k new distributions, defined by fixing both the output coefficient and the outcomes of Hadamards in the circuit. (You need to use O(log k) Hadamards.) Then define a ‘gadget’ Boolean circuit which takes the square of the difference between the positive and negative amplitudes and simulates the Born rule by concatenating a 4k^2 size alphabet to the output strings and turning on a number of strings equal to that called for by the Born rule. The whole circuit is then written as a SAT formula, so BQP \in NP^NP.

    I’m not sure whether you can reliably flatten the amplitude distribution down to the range in which this construction works.

    (Perhaps the question could be clarified if it could be determined whether circuits with outputs limited to (1) 1 positive coefficient and no negative coefficients, (2) 1 negative coefficient and no positive coefficients, or (3) 1 positive coefficient and 1 negative coefficient which cancel each other out are or aren’t BQP-complete.) If so, we would know that every BQP problem could be flattened and we could use (A \land \lnot B) \lor (\lnot A \land \lnot B) and not worry about Born rule pixellation violations.

    Anyway the motivational punchline: It seems we’ve expressed arbitrary BQP problems as combinations of amplitude distributions. If the pathfinding problem is easy, the distributions are in BPP and, by the closure properties of BPP (and because everything is reversible), shouldn’t we have BPP=BQP?

  20. Scott Says:

    James #19: I confess that I had trouble following your argument—there were a lot of skipped steps, and elision of the differences between amplitude vectors and probability distributions. Switching from the 1-norm to the 2-norm is a really big deal! In any case, everything in your argument appears to relativize, so if you reach the conclusion BPP=BQP, that’s a certificate that something went wrong, since there can be no relativizing proof of such a collapse.

  21. fred Says:

    Is it just me, or is someone in a rush to put his name on some QC equivalent of Moore’s law?

    Moore’s law was based on observing the improvements of silicon fabrication techniques.
    And now that we’ve reached the limits of how small a classical transistor can be, it doesn’t really apply in the same way it used to (we’re assembling bigger chips rather than miniaturizing them).

    Classical circuits are scalable because a classical bit is a robust abstraction by definition (until you get to quantum size).

    I may be wrong, but a QC is entirely different from a CC because you have to maintain the integrity of the wave function that’s growing exponentially in complexity (with the number of qubits), and that has to be done across the entire system.

    And declaring victory and predicting exponential growth with just N=50 seems really premature.

    It’s like observing that you can perfectly stack up 20 lego bricks: it’s straight enough across its entire length that it can stay upright with no issue. Therefore stacking up 40 lego bricks simply requires snapping together two 20-brick-stacks, and stacking up 80 lego bricks simply requires snapping together two 40-brick-stacks, etc… it’s exponential.
    So in theory it shouldn’t be long before we can literally build a lego stack from here to the moon, preserving a perfect vertical alignment on an entire length of millions and millions of blocks.
    But a stack with a few hundred bricks already behaves totally differently from a stack with just 40 bricks, in many ways that were not foreseen – bricks at the bottom could get crushed from the weight of the stack above them, or the column would start to amplify the most minute imperfection, or instantly buckle from the slightest air movement or imperfection in the symmetry of the gravity field, … each requiring entirely new engineering solutions to achieve scaling, with no guarantee of feasibility or economic viability (even though there’s nothing in the laws of physics that prevents building such a stack, in theory).
    It’s like a “leaky abstraction” applied to hardware.

  22. Charles510 Says:

    Is there a relationship between randomness and determinism? If there are truly random numbers or events (such as the radioactive decay of a certain atom within a certain time period), does that fact eliminate the possibility of a Laplacian demon?

    For example, suppose I choose to eat dinner tonight before or after 7:00PM, depending on whether a random-number generator generates an odd or even integer. If it’s an even number, I’ll eat before 7:00, otherwise I’ll eat after 7:00. If the generated number is truly random, and thus unpredictable by the demon, how could the demon predict when I’ll eat dinner?

  23. A passerby Says:

    I think that for this article, the first law of journalism (sometimes generalized to publishing) applies. The law states: “If the title of an article contains a question, the answer to that question is ‘no’.”

  24. fred Says:

    Scott #10
    “There’s been a grand total of zero observed violations of quantum mechanics since its discovery in the mid-1920s.”

    But QM is a theory of the small scale, and we don’t have that many examples of it being applied at large scale.

    E.g. General Relativity in itself is pretty accurate (gravitational lensing, the perihelion precession of Mercury, gravitational redshifting of light, the Shapiro effect), yet, when looking at things on a large scale (relative to the above observations), it’s clear that new unknown effects come into play (dark matter? dark energy? modified gravity? etc).

  25. fred Says:

    Charles510 #22

    I think that determinism and predictability are two different things.

    Consider a classical closed system where every particle has an effect on every other particle: it’s deterministic, but you can never perfectly simulate (predict) such a deterministic system from the inside – the simulation would have to take itself into account, creating an infinite recursion (your demon would need to simulate itself).

    And as soon as you settle for an imperfect simulation, you start introducing errors and the result becomes probabilistic, and it doesn’t matter if you inject extra “pure” randomness.

  26. Job Says:

    Is it possible to have double-exponential computational growth while maintaining a fixed degree of qubit connectivity?

    For example, with a mesh, would you need to add more than n qubits in each iteration, in order to enable the additional qubit interactions?

    Or is there an architecture where you can keep adding qubits with a fixed number of neighbors and it just works?

  27. Scott Says:

    Charles510 #22:

      Is there a relationship between randomness and determinism?

    Err, that one precludes the other? 🙂

    The more interesting question is whether “randomness” and “determinism” exhaust all the possibilities—my personal inclination is to say no. For more, see my Ghost in the Quantum Turing Machine essay.

  28. Scott Says:

    fred #24: One place where the analogy between QM and GR breaks down is that we already know GR has to be incomplete—for example, because it predicts singularities. There’s no analogous argument (or at least, no universally accepted one) showing that QM has to be incomplete, or that there are ever exceptions to it anywhere. The closest thing I know to such an argument is simply the measurement problem, but it’s philosophically contentious whether to regard that as a problem at all (the proponents of various leading interpretations think they’ve solved it, in different ways, without any change to the formalism of QM).

  29. Scott Says:

    Job #26: Yes, in principle, you can scale a QC arbitrarily even while each qubit talks only to a tiny number of neighbors (3 or even 2). One way to see this is that any gate, between any pair of qubits, can be simulated by a sequence of spatially local gates: just keep swapping qubits until the two you care about are next to each other, then apply the gate you want, then swap back. In practice, of course, this takes time, so the greater you can make the hardware connectivity, the more efficient everything will be (with the savings being up to linear in the number of qubits—i.e., not enough to change an exponential into a polynomial, but still significant).
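    Here's a minimal sketch of that routing trick on a 1D line with nearest-neighbor connectivity (illustrative Python of my own, not any particular compiler's output); the overhead is linear in the distance between the two qubits.

    ```python
    # Compile a two-qubit gate between distant qubits i and j on a line into
    # nearest-neighbor SWAPs, the gate applied locally, and the SWAPs undone.
    def route_two_qubit_gate(gate_name, i, j):
        if i > j:
            i, j = j, i
        ops = []
        for k in range(i, j - 1):               # walk qubit i rightward
            ops.append(("SWAP", k, k + 1))      # until it sits next to qubit j
        ops.append((gate_name, j - 1, j))       # apply the gate on adjacent qubits
        for k in reversed(range(i, j - 1)):     # then undo the swaps
            ops.append(("SWAP", k, k + 1))
        return ops

    print(route_two_qubit_gate("CNOT", 0, 4))
    # 3 SWAPs, the CNOT, 3 SWAPs back: 2*(4 - 0 - 1) + 1 = 7 local operations
    ```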

    Exactly the same connectivity issues arise with classical computer architecture. Have you ever seen the inside of a Cray—just a giant tangle of connections among processors arrayed on the outer perimeter? Or for that matter, the inside of a brain, after which the Cray was modeled? 🙂

  30. A. Hardin Says:

    Scott,

    If a computer (quantum or not) “could” solve the halting problem, is that similar to solving NP-complete problems in polynomial time? It seems related to CTCs also.

  31. fred Says:

    Scott #28.

    What about an analogy to the fact that even though the Maxwell equations are linear (and this is coming from QM being linear, right?), there is a practical limit to the superposition of EM fields (https://en.wikipedia.org/wiki/Schwinger_limit)?

    In “extreme” modes, the nature of the fabric of time and space (with virtual particles, etc, which aren’t covered by the base theory) starts to creep in to create unexpected side effects and limitations.

    It seems to me that a QC with 10,000,000 qubits is relying on the linearity of the wave function in an extreme regime compared to a QC with 50 qubits.
    It would be really miraculous to get a (double) “exponential” free lunch out of nature.

    On a meta level, if such a free lunch was indeed available, you’d think we would see it around us… after all nature did a pretty good job at packing so much classical computational power in the human brain (in terms of neurons and connections), even compared to our current best silicon chips!

  32. fred Says:

    In my previous post I broke one of my own rules: whatever humans come up with is “natural” (that includes the atomic bomb and “clean” coal).

    So maybe nature’s way to realize quantum effects on a large scale (aka a QC) was first to evolve carbon based classical computers (our brains, and in particular, Scott’s brain)…

  33. fred Says:

    Isn’t the root of QC skepticism vs. belief traced back to the intuition about whether the wave function is “real” or not?

    Science works by mapping “observables” in the physical world to explicit mathematical objects in the models.
    But what about the reverse?
    I.e., must any “implicit” mathematical object needed to simulate the physical world also map to some (unobservable) entity in the physical world?

    The wave function encoding so much information about reality, it seems that reality has to have some way to account for it (in the two-slit interference experiment, what exactly is accounting for the interference pattern?)

    This must be true, even if it means something really un-intuitive, like the QM reality we observe is the “output” of a simulation, and the wave function we can’t observe is accounted for as the internal of the simulation (at the higher level), and it could even be classically computed, taking exponentially long for the simula-tor, but seeming magical and instantaneous to the simula-ted (us).
    As an example, we could run a gigantic instance of some sort of “game of life” either on a QC or a classical computer (made of transistor or water pipes and valves). The details of this machinery (and its efficiency) wouldn’t matter to “beings” living in that game of life instance, they have no idea what it takes to update/compute/select the successive states of their universe (*).
    And, in a recursive manner, those “beings” would end up trying to recreate their own instance of the game of life, reproducing all the properties they observe, and they would have to do this using a simulation mimicking the machinery of the upper level of their reality (inefficiently, like us building classical computers to simulate QM systems) or somehow using the implicit power of the upper level of their reality (like us building a QC).
    Of course, just like us, those beings aren’t really “building” anything… it all unrolls in a deterministic way – it’s about whether a universe, its rules, and its initial conditions have the power to “groom” complexity (i.e. which states are linked together).

    (*) it’s not even clear whether there’s the need for a simula-tor: each state of the universe exists in the infinite stack of all the possible states, and the explicit “act” of linking contiguous states (what we call the dynamic evolution) is maybe totally unnecessary, and all the states exist at once (space-time block). Maybe “consciousness” is the implicit chaining of related states (for each possible chaining rule, a different universe exists, with different experiences)… well, that’s my own crazy take on Max Tegmark’s idea of reality being purely mathematical.

  34. James Warren Says:

    fred #31, #33:

    “It seems to me that a QC with 10,000,000 qubits is relying on the linearity of the wave function in an extreme regime compared to a QC with 50 qubits.
    It would be really miraculous to get a (double) “exponential” free lunch out of nature.”

    If quantum unitary evolution broke down (just linearity would do this not just nonlinearity), QM would be able to solve #P problems, which most people think take exponential time, so a breakdown of unitary evolution would actually be a boon to QC.

    “The wave function encoding so much information about reality, it seems that reality has to have some way to account for it (in the two-slit interference experiment, what exactly is accounting for the interference pattern?)

    This must be true, even if it means something really un-intuitive, like the QM reality we observe is the “output” of a simulation, and the wave function we can’t observe is accounted for as the internal of the simulation (at the higher level), and it could even be classically computed, taking exponentially long for the simula-tor, but seeming magical and instantaneous to the simula-ted (us).”

    What if (exponential state space of) QM arises from turbulence-like behavior with no smallest length scale? It could even explain the use of complex numbers in QM. If there is a diffusion through length scales induced by unstable modes with imaginary momenta, shouldn’t the diffusion constant be imaginary (or at least complex)? Wouldn’t more complex, simulation-like structures be ruled out unless P=NP?

    Scott #20:

    (1) Is the relativization barrier for BPP=BQP, just the one mentioned in your Algebrization paper with Avi Wigderson? In particular Theorem 5.11 (iv)?

    (2) https://cs.stanford.edu/~trevisan/cs254-10/lecture08.pdf
    Why doesn’t the existence of fpras for #P with an NP oracle imply that BQP \in BPP^NP \in \Sigma_3?

  35. asdf Says:

    https://phys.org/news/2019-06-optimal-quantum-linked-gravity.html

    There’s something claiming some connection between quantum computing and gravity. I haven’t figured out what it’s actually saying. Just putting it here in case it interests anyone.

  36. Scott Says:

    James #34:

      Is the relativization barrier for BPP=BQP, just the one mentioned in your Algebrization paper with Avi Wigderson? In particular Theorem 5.11 (iv)?

    No, algebrization is an extension of the original relativization barrier. The existence of oracles that make BPP=BQP and BPP≠BQP was shown already by Bernstein and Vazirani in 1993.

      Why doesn’t the existence of fpras for #P with an NP oracle imply that BQP \in BPP^NP \in \Sigma_3?

    That’s an excellent question with a technical answer: it’s because the FPRAS is for #P, not for GapP. In other words, it’s for sums of exponentially many nonnegative terms, not sums of both positive and negative terms that can cancel each other out. But the latter (i.e., GapP) is what’s relevant for calculating amplitudes and probabilities in a quantum computation.
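    If a numerical cartoon helps, here's a toy sketch of the difference (my own example, with nothing quantum in it): a sum of nonnegative terms can be estimated well by sampling a small fraction of them, while a signed sum whose terms nearly cancel cannot.

    ```python
    # Contrast a #P-style sum (all terms nonnegative) with a GapP-style sum
    # (terms of both signs that nearly cancel), each estimated by uniformly
    # sampling a small fraction of the terms.
    import random

    random.seed(0)
    N = 10 ** 6
    nonneg = [random.random() for _ in range(N)]             # all terms >= 0
    signed = [x * random.choice([1, -1]) for x in nonneg]    # massive cancellation

    def sampled_estimate(terms, samples=10 ** 4):
        picked = random.choices(terms, k=samples)
        return len(terms) * sum(picked) / samples

    for label, terms in [("#P-like  ", nonneg), ("GapP-like", signed)]:
        print(label, "true sum:", round(sum(terms)), "estimate:", round(sampled_estimate(terms)))
    # Typically the nonnegative estimate lands within ~1% of its true sum (~500000),
    # while the signed sum's true value is only of order sqrt(N), far smaller than
    # the statistical error of the sampled estimate.
    ```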

    Incidentally, this distinction between #P and GapP is at the core of the hardness arguments that Alex Arkhipov and I made for BosonSampling, and which have since been adapted to other quantum supremacy experiments.