## Doubts about teapot supremacy: my reply to Richard Borcherds

Richard Borcherds is a British mathematician at Berkeley, who won the 1998 Fields Medal for the proof of the monstrous moonshine conjecture among many other contributions. A couple months ago, Borcherds posted on YouTube a **self-described “rant” about quantum computing**, which was recently making the rounds on Facebook and which I found highly entertaining.

Borcherds points out that the term “quantum supremacy” means only that quantum computers can outperform existing classical computers on *some* benchmark, which can be chosen to show maximum advantage for the quantum computer. He allows that BosonSampling could have some value, for example in calibrating quantum computers or in comparing one quantum computer to another, but he decries the popular conflation of quantum supremacy with the actual construction of a scalable quantum computer able (for example) to run Shor’s algorithm to break RSA.

Borcherds also proposes a “teapot test,” according to which any claim about quantum computers can be dismissed if an analogous claim would hold for a teapot (which he brandishes for the camera). For example, there are many claims to solve practical optimization and machine learning problems by “quantum/classical hybrid algorithms,” wherein a classical computer does most of the work but a quantum computer is somehow involved. Borcherds points out that, at least as things stand in early 2021, in most or all such cases, the classical computer could’ve probably done as well entirely on its own. So then if you put a teapot on top of your classical computer while it ran, you could equally say you used a “classical/teapot hybrid approach.”

Needless to say, Borcherds is correct about all of this. I’ve made similar points on this blog for 15 years, although less Britishly. I’m delighted to have such serious new firepower on the scoffing-at-QC-hype team.

I do, however, have one substantive disagreement. At one point, Borcherds argues that sampling-based quantum supremacy itself fails his teapot test. For consider the computational problem of predicting how many pieces a teapot will break into if it’s dropped on the ground. Clearly, he says, the teapot itself will outperform any simulation running on any existing classical computer at that task, and will therefore achieve “teapot supremacy.” But who cares??

I’m glad that Borcherds has set out, rather crisply, an objection that’s been put to me many times over the past decade. The response is simple: *I don’t believe the teapot really does achieve teapot supremacy on the stated task! At the least, I’d need to be shown why. You can’t just assert it without serious argument.*

If we want to mirror the existing quantum supremacy experiments, then the teapot computational problem, properly formulated, should be: given as input a description of a teapot’s construction, the height from which it’s dropped, etc., *output a sample from the probability distribution* over the number of shards that the teapot will break into when it hits the floor.

If so, though, then clearly a classical computer can easily sample from the same distribution! Why? Because presumably we agree that there's a negligible probability of more than (say) 1000 shards. So the distribution is characterized by a list of at most 1000 probabilities, which can be estimated empirically (at the cost of a small warehouse of smashed teapots) and thereafter used to generate samples. In the plausible event that the distribution is (say) approximately Gaussian, it's even easier: just estimate the mean and variance.
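Concretely, the estimate-then-sample strategy is a few lines of code (a toy sketch, with made-up shard counts standing in for the warehouse data):

```python
import random
from collections import Counter

# Hypothetical shard counts from repeated drops of identically built
# teapots (made-up numbers, standing in for the smashing experiment).
observations = [3, 5, 4, 4, 6, 5, 3, 4, 5, 7, 4, 5, 6, 4, 5]

# One-time "precomputation": tabulate the empirical distribution.
counts = Counter(observations)
outcomes = sorted(counts)
weights = [counts[k] / len(observations) for k in outcomes]

# Thereafter, a classical computer samples from it trivially.
samples = random.choices(outcomes, weights=weights, k=10)
print(samples)
```

Since the distribution has small support (at most ~1000 outcomes), the table of probabilities is tiny, and each fresh sample costs essentially nothing.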

A couple days ago, I was curious what the distribution looked like, so I decided to order some teapots from Amazon and check. Unfortunately, real porcelain teapots are *expensive*, and it seemed vaguely horrific to order dozens (as would be needed to get reasonable data) for the sole purpose of smashing them on my driveway. So I hit on what seemed like a perfect solution: I ordered *toy* teapots, which were much smaller and cheaper. Alas, when my toy “porcelain” teapots arrived yesterday, they turned out (unsurprisingly in retrospect for a children’s toy) to be some sort of plastic or composite material, meaning that they *didn’t* break unless one propelled them downward forcefully. So, while I can report that they tended to break into one or two large pieces along with two or three smaller shards, I found it impossible to get better data. (There’s a reason why I became a *theoretical* computer scientist…)

The good news is that my 4-year-old son had an absolute *blast* smashing toy teapots with me on our driveway, while my 8-year-old daughter was thrilled to take the remaining, unbroken teapots for her dollhouse. I apologize if this fails to defy gender stereotypes.

Anyway, it might be retorted that it’s not good enough to sample from a probability distribution: what’s wanted, rather, is to calculate how many pieces this *specific* teapot will break into, given all the microscopic details of it and its environment. Aha, this brings us to a crucial conceptual point: in order for something to count as an “input” to a computer, *you need to be able to set it freely*. Certainly, at the least, you need to be able to measure and record the input in its entirety, so that someone trying to reproduce your computation on a standard silicon computer would know exactly which computation to do. You don’t get to claim computational supremacy based on a problem with secret inputs: that’s like failing someone on a math test without having fully told them the problems.

Ability to set and know the inputs is *the* key property that’s satisfied by Google’s quantum supremacy experiment, and to a lesser extent by the USTC BosonSampling experiment, but that’s not satisfied at all by the “smash a teapot on the floor” experiment. Or perhaps it’s better to say: influences on a computation that vary uncontrollably and chaotically, like gusts of air hitting the teapot as it falls to the floor, shouldn’t be called “inputs” at all; they’re simply *noise sources*. And what one does with noise sources is to try to estimate their distribution and average over them—but in that case, as I said, there’s no teapot supremacy.

A Facebook friend said to me: that’s well and good, but surely we could change Borcherds’s teapot experiment to address this worry? For example: add a computer-controlled lathe (or even a 3D printer), with which you can build a teapot in an arbitrary shape of your choice. Then consider the problem of sampling from the probability distribution over how many pieces *that* teapot will smash into, when it’s dropped from some standard height onto some standard surface. I replied that this is indeed more interesting—in fact, it already seems more like what engineers do in practice (still, sometimes!) when building wind tunnels, than like a silly reductio ad absurdum of quantum supremacy experiments. On the other hand, *if* you believe the Extended Church-Turing Thesis, then as long as your analog computer is governed by classical physics, it’s presumably inherently limited to an Avogadro’s number type speedup over a standard digital computer, whereas with a quantum computer, you’re limited only by the exponential dimensionality of Hilbert space, which seems more interesting.

Or maybe I’m wrong—in which case, I look forward to the first practical demonstration of teapot supremacy! Just like with quantum supremacy, though, it’s not enough to *assert* it; you need to … put the tea where your mouth is.

**Update:** On the suggestion of Ernest Davis, who I can now reveal as the Facebook friend mentioned above, I just ordered some terra cotta flower pots, which look cheap, easily smashable, and environmentally friendly, and which will hopefully be acceptable substitutes for porcelain teapots in a new experiment. (Not that my main arguments in this post hinge on the results of such an experiment! That’s the power of theory.)

**Another Update:** Some of you might enjoy John Horgan’s *Scientific American* column on reality vs. hype in quantum computing, based on conversations with me and with Terry Rudolph of PsiQuantum.

Comment #1 April 20th, 2021 at 2:28 pm

A remarkably similar discussion took place on Physics Stack Exchange, regarding a falling cup of pudding rather than a teapot: https://physics.stackexchange.com/questions/511067/why-is-googles-quantum-supremacy-experiment-impressive. It seems to me that the two top answers both capture different aspects of your argument in this post.

Comment #2 April 20th, 2021 at 2:38 pm

But if we're talking about supremacy experiments (which don't require having an asymptotic improvement), then having an Avogadro's-number-sized speedup may well be enough.

Comment #3 April 20th, 2021 at 2:38 pm

As a side-note to all this, it’s been really wonderful having Borcherds (and many others!) post so many freely-available lectures online, in part incentivised by COVID.

Comment #4 April 20th, 2021 at 2:40 pm

Speaking of gender stereotypes: Our 4 year old son was indeed happy to (try to) smash teapots with his father (I tried too), while our 8 year old daughter was busy watching some new movie on Netflix, but when our son heard that our daughter took a set for her dolls, he immediately insisted that he should have a set for his dolls too!

Comment #5 April 20th, 2021 at 2:40 pm

I agree. The interesting thing about the quantum sampling experiments is not that you cannot simulate them classically, but rather that you CAN simulate them at an exponentially growing cost. This shows that this is not about difficulty of cloning a physical object, such as a wax seal or ink stain, but rather about simulating a controlled and replicable experiment.

Note that if you could really manufacture identical teapots with unpredictable randomness, you could use that for crypto. For example, maybe we would share two sets of teapots, and break them to generate some shared randomness. Let's just say that if I were a VC, a "teapot crypto startup" would not be high on my list of companies to invest in…

Comment #6 April 20th, 2021 at 2:46 pm

In terms of computational problems most easily solved by direct experimentation, though, might I propose the protein folding problem as an example?

Comment #7 April 20th, 2021 at 2:48 pm

P.s. As I wrote on Twitter, I also disagree with Borcherds’s description of problems such as simulating quantum systems and traffic optimization as “useless”. Indeed, they may well be more useful than integer factoring!

Comment #8 April 20th, 2021 at 2:59 pm

dankane #2:

But if we're talking about supremacy experiments (which don't require having an asymptotic improvement), then having an Avogadro's-number-sized speedup may well be enough.

Yes, if you had an Avogadro's-number-sized speedup that was actually for real, that would certainly be of interest! I confess that for me, though, a quantum speedup (even if not yet scalable) is of *greater* interest—precisely because it *hints at* an asymptotic speedup, and seems difficult to explain if such a speedup isn't possible (a point that, notably, even Gil Kalai agrees about).

Comment #9 April 20th, 2021 at 3:02 pm

In the multitude of arguments I’ve had over the years concerning various forms of pseudo-reductionism I have called this the Stone Soup Fallacy, after the old French fable.

The fallacy of the “Alchemist’s Dodge” or the “(Philosophers’) Stone Soup”.

It goes a bit like this:

You can make a hearty soup out of nothing but stones and hot water …

if you add a pinch of salt …

if you add a dash of pepper …

if you add a few potatoes …

if you add a carrot or two …

if you add a hock of ham …

if you add …

And so it goes.

The moral of this particular telling of the story

is that a reduction is not a proper reduction if

it does not reduce something to something lower.

Comment #10 April 20th, 2021 at 3:15 pm

This is an excellent blog post, but it would be even better if you bought and then smashed $250,000 worth of teapots.

Comment #11 April 20th, 2021 at 3:45 pm

I’ve been calling your objection the *encodability requirement*. You want the problem to be specified as data. You want to be able to write the problem down. For reasons perhaps best exemplified by how SI definitions have changed over time, individual physical objects are not ideal for defining this sort of thing.

Encodability is perhaps not strictly necessary. The real underlying property you want to guarantee is that two competing systems are instantiating the same problem, and their accuracy is being measured against the same standard. But encodability sure is convenient for achieving that!

As you note, an encodable version of the teapot problem would be to have a parameterized physics model of a teapot, and to ask *according to the model* how many pieces the modelled teapot will break into. Crucially, this means differences between the teapot and the model are now errors in the teapot instead of errors in the model. Having made this conceptual change, the question becomes whether or not you can find a model with enough detail to be difficult to solve for a computer and to accurately match the behavior of the real teapot, but not so much detail that you can't initialize its parameters by scanning a teapot or 3D-print a teapot matching specified parameters. I suspect the answer is that you can't find a model meeting both criteria.

If someone found such a model, demonstrated that teapot experiments agreed with the model on the cases they can actually run, and that the model became intractable to solve in some regime where teapots were still constructible and we had reason to think they were still matching the model, I would admit they demonstrated beyond-classical teapot computational powers.

Comment #12 April 20th, 2021 at 4:08 pm

I would add that "specifying the input" is also important for gauging purely algorithmic advances – cf. Ewin Tang's dequantization of recommendation systems. Unlike shards of the teapot, this requirement can't be swept under the rug.

Comment #13 April 20th, 2021 at 4:25 pm

From my (not very deep) perusal of the Google “quantum supremacy” claim, it seemed to me that it had the problem that it’s not reproducible.

Suppose that, rather incredibly, I actually want to solve the exact problem that their quantum computer solved (despite it apparently being a useless curiosity). I phone them up, say “I want to order ten of those quantum computers! We need one for each of our factories. When can you deliver?”.

The answer, as I understand it, is "never". The exact parameters of the problem are specific to their particular hardware. (Amazing, isn't it, that their hardware just happened to deliver the parameters that are the ones our factories need!) Unfortunately, if they construct a new quantum computer of the same design, it will only work for the problem with different parameters, determined by the exact properties of that particular physical computer. It will be of no use in our factories.

At least that’s my not-very-informed reading. Maybe those who know more can say whether their computer is actually reproducible.

Comment #14 April 20th, 2021 at 4:52 pm

Maybe I’m missing something, but the ‘classical computer’ you use to get a sample from the teapot distribution seems like the perfect example of a classical computer/teapot-hybrid. Not the kind that Borcherds is talking about, but rather the opposite kind, where the teapots do all the real work and the classical computer is just sitting there to make things look fancy and modern. I mean what was the computer gonna do if the small warehouse of teapots hadn’t provided it with the distribution first?

So as it stands it seems to me that instead of explaining why the proposed teapot supremacy experiment does not show that the teapot is superior to a classical computer you merely explained why the proposed teapot supremacy experiment does not show that the teapot is superior to a teapot-computer hybrid.

But that is not very surprising. Nobody would expect a teapot to be superior to a clever teapot-computer hybrid, much like no one would expect a quantum computer to outperform a quantum/classical hybrid or a classical computer to outperform a computer-human hybrid, etc.

I guess I misinterpret the argument, but how?

Comment #15 April 20th, 2021 at 5:13 pm

What fun! I predict this will be one of the kids' long-lasting memories of childhood :). Good thing they have this blog post to use, to explain the context to their grandkids :). Now I better read this essay again to understand it better myself!

Comment #16 April 20th, 2021 at 5:21 pm

Vincent #14: In Sipser’s computability theory textbook, there’s a wonderful problem that goes like—

Let f:N→N be the constant 1 function if God exists, or the constant 0 function if God does not exist. Is f computable?

The solution is yes, f is computable, and it doesn't depend at all on your beliefs about God. The proof is that *every* constant function f is computable. I.e., whatever effort it takes to determine whether God exists or not, that's "precomputation" and is utterly irrelevant to the question of computability. Either answer, once found, could simply be "hardwired" into your program.
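In code, the point amounts to this trivial sketch (not from Sipser, just an illustration):

```python
# Whichever way the (arbitrarily expensive) offline investigation comes out,
# the resulting function f is one of these two constant-time programs.

def f_if_god_exists(n: int) -> int:
    return 1  # the constant 1 function

def f_if_god_does_not_exist(n: int) -> int:
    return 0  # the constant 0 function

# Either program computes f; settling which one is correct was
# "precomputation", irrelevant to f's computability.
```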

(But of course, not *all* functions are computable! E.g., the halting problem isn't!)

If you understand that, then you should also understand the point I made in this post. Whatever smashing of teapots it takes to determine the probabilities to put into your program, that's precomputation, which is utterly irrelevant to the question of the computational complexity of generating samples from the distribution *considered as a distribution*. The latter question is settled by the distribution having small support.

(But not *all* fixed probability distributions over n-bit strings are efficiently samplable by a classical computer! BosonSampling distributions probably aren't!)

Comment #17 April 20th, 2021 at 5:26 pm

Craig Gidney #11: That was beautifully put! Except that for me, encodability is not just a practical convenience, but more like a prerequisite to considering a problem a properly *computational* problem at all, as opposed to any of the other kinds of problems you might encounter in life (e.g., the data you need being behind lock and key, protected by armed goons, etc.)

Comment #18 April 20th, 2021 at 5:49 pm

If your recent prize money is supposed to encourage further research, then that’s an argument for using some of it to buy some of the actual porcelain teapots and smash them.

(Please do not actually do this.)

Comment #19 April 20th, 2021 at 6:08 pm

Radford Neal #13: I believe they’ve gotten substantially better at calibration just within the last year or two, and could now reliably produce multiple chips with the exact same gates. But I’ll ask Sergio Boixo to clarify.

Comment #20 April 20th, 2021 at 6:41 pm

Scott #16

> Let f:N→N be the constant 1 function if God exists, or the constant 0 function if God does not exist. Is f computable?

> The solution is yes, f is computable, and it doesn’t depend at all on your beliefs about God. The proof is that every constant function f is computable.

That problem doesn't depend on your beliefs about God, but it does depend on your having *some* belief about God. In particular, it requires that you believe either that God exists or not. If you don't believe either of those things, then f is not defined and the question is ill-posed. The definition of f depends on a particular instance of the law of excluded middle, and not a trivial one. Maybe you think it's obvious that everything either exists or not, but once you head down that road it's not long before you wind up in "perfect beings exist necessarily" cuckoo-land.

(I don't know whether this affects your analogy.)

Comment #21 April 20th, 2021 at 6:51 pm

Nick #20: Fine, then let's say that f is "computable to whatever extent it's well-defined." There isn't any sense in which it's uncomputable. Likewise, in my analogy, the probability distribution over number of teapot shards is efficiently samplable to whatever extent it's well-defined. And this is fundamentally different from the distributions we see in quantum supremacy experiments, which are well-defined but *not* (we don't think) efficiently samplable.

Comment #22 April 20th, 2021 at 7:12 pm

Speaking of QC hype, I really wish you would call out Xanadu as much as you called out d-wave back in the day.

They employ very smart people, but their claims are outrageous, and in this case they are using something you invented to raise millions based on lies.

As someone who lives in Canada, it’s sad to see them raising money from the government and from Canadian VCs.

Comment #23 April 20th, 2021 at 7:57 pm

Radford Neal #13: Please note that we have published multiple experiments with standard quantum gates on Sycamore chips, such as arXiv:2101.08870 and arXiv:2102.06132.

Perhaps your objection is that in the main text we used “arbitrary unitaries”, with parameters measured in calibration before the experiment, and slightly different for every pair of qubits. Using “Sycamore unitaries” with standard parameters, the same for every pair of qubits, would have lowered the fidelity by a factor of 1/2 (see Fig. S30 in the supplement arXiv:1910.11333). This is OK.

Comment #24 April 20th, 2021 at 8:16 pm

But I can drop a randomly constructed teapot *made of 53 atoms* in a vacuum and have the system mathematically well defined.

That's just what Google did.

Comment #25 April 20th, 2021 at 8:20 pm

James Gallagher #24: It's not what they did—superconducting qubits are made of the states of billions of electrons, they're not in vacuum, and it's only the list of *operations* applied to the qubits that's randomly constructed—but OK! 🙂

Comment #26 April 20th, 2021 at 8:36 pm

Scott #25

Well, you argued that the macroscopic teapot falling in air was too complicated to mathematically define, which is a bit unfair when Google only demonstrated a 53-qubit system.

Also, the fact that the qubits are not in vacuum is not relevant; they aren't falling through the air.

Anyway, just a bit of fun, I enjoyed the post, and hadn’t heard of these arguments and discussions.

I’m impressed you ordered a load of cheap crap on Amazon and then smashed it all up – right on bruv!!!

Comment #27 April 20th, 2021 at 8:53 pm

Sergio Boixo #23:

Thanks for the explanation! It’s good to know that things work well enough that one can have a system with reproducible properties.

Comment #28 April 20th, 2021 at 10:27 pm

Scott #0,

So clear that it makes you feel like you always knew it. Thanks! But, wait, couldn’t you apply this line to the Church-Turing-Deutsch principle as well as the terra cotta flower pots? Like, even if this principle was morally true, *any physical process* has no input then it’s not computable. Or is *any* the input?

Comment #29 April 20th, 2021 at 10:47 pm

ghost learner #28: Sorry, I don’t understand your question.

Comment #30 April 20th, 2021 at 10:54 pm

To try to rescue Vincent #14's point: if the problem is fixed as "sample the number of shards for the teapot", then you can make the smashing experiment part of precomputation. But if the problem is "sample the number of shards for this ceramic object X", where X is an input, then the smashing experiment becomes a subroutine rather than precomputation. In that case it seems like a ceramic-object/classical-computer hybrid has an advantage over just the classical computer (at least if you accept that the smashing experiment is really necessary, and that you couldn't do as well by some computation without it).

Comment #31 April 20th, 2021 at 11:00 pm

I am not certain the video was worth a comment from a computer scientist, since it is unlikely Borcherds knows half as much about this as Professor Aaronson. I think the analogy is a bit misleading. It would be like Professor Scott Aaronson shining light on the moon.

T = { teapots that, when dropped from a height of 1 m in a particular room in Berkeley at a particular date and time, shatter into 1000 pieces }

The correct question should be whether the teapot in hand and the one in the kitchen are both in T. I think a teapot can answer only for its own membership.

However, when we define a limited model of quantum computation and constructivize it in the lab, it answers membership for a particular language (however narrow the definition of the language may be, the quantum computer can realize membership for potentially infinite members of the language, depending on the model's scalability and definition). As Professor Barak puts it nicely: "The interesting thing about the quantum sampling experiments is not that you cannot simulate them classically, but rather that you CAN simulate them at an exponentially growing cost."

Comment #32 April 21st, 2021 at 12:39 am

I think something like an optical device is a better analogy for the supremacy experiment (though the teapot is fine for traffic optimization, etc).

The claim would be that a “quantum optical device” can see things at a much higher resolution than a classical (i.e. lesser-quantum) device.

That seems inherently true, and definitely biased towards quantum devices, but it would still be a test of one of the device’s most useful applications.

You might end up with the teapot scenario if the optical device can only be pointed at a very specific scene (i.e. not sufficiently programmable), or requires such deliberate calibration that it misrepresents the device’s actual capabilities.

But that’s a problem with the experiment, not the test.

As a curiosity, in the Sycamore experiment, the “optical device” would be sampling individual pixels from an image large enough to require a few thousand laptop screens in width and height (IIRC).

That’s its own problem, once you zoom in that much it really dilutes the results.

And then the discussion shifts to whether a non-optical classical device can fool the “human eye” at that zoom level.

Comment #33 April 21st, 2021 at 2:31 am

Dmitri #30: Yes, see the paragraph in the original post beginning “A Facebook friend said to me…”

Comment #34 April 21st, 2021 at 5:42 am

>then as long as your analog computer is governed by classical physics, it’s presumably inherently limited to an Avogadro’s number type speedup over a standard digital computer, whereas with a quantum computer, you’re limited only by the exponential dimensionality of Hilbert space

Apologies if this is dumb, but I was under the impression that if we think of an "asymptotic version" of the Google experiment, for large sizes the noise comes to dominate and so the output is easy to simulate (just simulate the noise). Is this incorrect? Otherwise it seems like the Sycamore experiment is basically also a constant-factor speedup (albeit with an astronomical constant, depending on physical gate fidelity).

(I agree with the point that the teapot problem is not mathematically well-posed – has anyone tried to come up with a family of initial conditions for the Navier-Stokes equations that are hard to simulate?)

Comment #35 April 21st, 2021 at 8:09 am

DavidM #34: You might describe a speedup from a NISQ device as a "constant-factor speedup on the shores of an exponential speedup." I.e., a constant-factor speedup that was *possible* only because of the asymptotic exponentiality of Hilbert space, an exponentiality that we hope to exploit just as soon as we're past the fault-tolerance threshold. Whereas an Avogadro's-number speedup from a classical analog system, even assuming it's for real, is … not really on the shores of anything else. It's up against the wall of Avogadro's number.

Comment #36 April 21st, 2021 at 8:31 am

I made the very same argument using “dog turds” instead of “teapots” on this very blog a couple years ago, and Scott quickly dismissed it in a few words. I’m glad this has now been properly addressed… 😛

Comment #37 April 21st, 2021 at 8:50 am

As a curiosity, there has been tremendous progress in recent years in the classical simulation of physics.

Comment #38 April 21st, 2021 at 8:56 am

To me, this "rant" is exactly that… a rant! A rant from a person (whatever inaccuracies he may have uttered) who has had enough of all the hype about QC.

A bit off topic, Scott: the threshold theorem says that if the error per gate is a small enough constant, then QC is scalable (with error correction). Has anybody treated a case where the error per gate is not constant, but increasing as a function of the number of gates? Depending on the exact physical implementation of the gates in a QC, this might be possible. This requires some physics instead of TCS, I guess.

Comment #39 April 21st, 2021 at 9:03 am

LK2 #38: Well, what's the *limit* of the error rate as the number of gates goes to infinity? If it's less than the fault-tolerance threshold, then fault-tolerance is still possible; if not, then not. That didn't require any physics, just logic. 😀

Comment #40 April 21st, 2021 at 9:39 am

Incidentally, Ted #1: Thanks so much for that link (which I just had a chance to read this morning)! The answers there do indeed render most of this post superfluous. I especially liked that, rather than quibble over semantics (the exact definition of "quantum supremacy," etc. etc.) as so many like to do with this topic, the questioner just zeroed in immediately on what's really at issue here: namely, why was Google's experiment *impressive*? And then got direct and accurate answers to that question.

Comment #41 April 21st, 2021 at 10:28 am

Scott #39: the logic you outline is obvious, but my question remains, and I believe it is tied to how the qubits/gates are physically realised. I was just wondering if there are any studies suggesting that the error rate for a qubit (or gate) grows with the number of qubits (gates) in the QC.

Of course I agree with you that if this growth does not surpass the threshold everything is fine but I still think that TCS cannot really answer this.

Comment #42 April 21st, 2021 at 10:43 am

Something bothers me about your arguments against teapot supremacy. Take the following different example. I claim "myRSA supremacy": whatever pubkey(x) you give me, I can quickly compute x = privkey(pubkey(x)), but no one else can. This is a useless supremacy, because I cannot do anything else. Also, in this case, of course we know that there is a classical algorithm that does the job, but why couldn't BosonSampling be the same? I know that even BQP = P is possible as well, but what I'm saying is that at the moment I don't see why BosonSampling is any better than myRSA.

Comment #43 April 21st, 2021 at 10:45 am

LK2 #41: Yes, of course, whether you *can* build a large system where the qubits remain below the fault-tolerance threshold is ultimately a question for engineering (if you're an optimist) or physics (if you're a pessimist) 🙂 , but in any case one where TCS plays only a side role.

And yes, experimentalists have found that as you scale to larger numbers of qubits, you get more cross-talk between the qubits, and other issues that tend to push the error rate higher. That's why some people worried that even a quantum supremacy experiment, of the sort Google did two years ago, wouldn't be achievable, but of course it ultimately was.

The view of most of us is that cross-talk between qubits and so forth are all "temporary" problems (where "temporary," alas, could mean decades in this context)—you just need to get over this finite-sized hump to where fault-tolerance starts working, and then you get a massive wind pushing you in the opposite direction, as more qubits and gates let you implement better error-correcting codes and thereby push the effective error rate *lower* rather than higher. Gil Kalai, of course, takes the opposite view, that the "finite-sized hump" is actually an infinite mountain that can never be scaled even in principle. Aren't you interested to find out who's right? 🙂

Comment #44 April 21st, 2021 at 10:54 am

Scott #42: thanks for the clear answer! Of course I’m interested in finding out what will happen with all this. I’m absolutely not skeptical about QC (otherwise I would be skeptical about QM, and I am not), but the physical implementation of a QC is a really exciting challenge. I admit that I am slightly (only slightly) edging towards Gil’s side, but I would be VERY happy if the thing works!

Thanks again.

Comment #45 April 21st, 2021 at 10:56 am

domotorp #42: Once again, “myRSA supremacy” depends on secret information, known only to you. Quantum supremacy doesn’t. The classical computer gets exactly the same information as the quantum computer about which distribution to sample from; it just can’t sample as quickly. That’s the difference.
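The asymmetry Scott is pointing at can be made concrete with a toy numeric sketch (tiny illustrative numbers, nothing like real RSA): inversion is instant for whoever generated the private exponent, and brute-force search for everyone else. The hardware is identical in both cases; only the information differs.

```python
# Toy trapdoor sketch (NOT real RSA: tiny numbers, no padding).
# The keyholder's "supremacy" comes from having generated d, not from
# any new kind of computing hardware.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+ modular inverse)

x = 42
c = pow(x, e, n)                   # anyone can compute the forward direction

assert pow(c, d, n) == x           # the keyholder inverts instantly

# Without d, you're reduced to brute-force search over the message space:
recovered = next(m for m in range(n) if pow(m, e, n) == c)
assert recovered == x
```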

Comment #46 April 21st, 2021 at 11:09 am

Scott #44: I don’t think it is secret information; anyone can compute privkey from pubkey (in exponential time). To give another example, whenever someone discovers a new algorithm for any specific task, that person has supremacy over the whole world in that task until they share their method, don’t they?

Comment #47 April 21st, 2021 at 11:26 am

More generally, I think that any function might be a trapdoor function until it is proved to be complete for some complexity class. Even NP might have a trapdoor (a poly alg); it’s just that the more problems we reduce to something, the less likely it is, I suppose, that it has a trapdoor. But until more things are reduced to BosonSampling, the only evidence we have that it lacks a trapdoor is our intuition and physical experience.

Comment #48 April 21st, 2021 at 11:29 am

How exactly do qubits scale in performance? The pop-sci answer is that n qubits have the power of 2^n classical bits, but digging around suggests that that’s incorrect, and refers only to the number of states that can be represented. I found some of Scott’s writing that suggests that there isn’t a global performance increase at all, only the ability to use faster quantum algorithms for certain tasks. Unfortunately, having spent quite a bit of time Googling, I wasn’t able to find anything definitive, though it sounds like there may be a speedup somewhat akin to 2^n on highly parallel tasks, and less of one/none at all on purely serial ones.

In particular, how do qubits scale for artificial neural nets? To the best of my knowledge there’s been very little work in that area so far, but if one could use near-future qc systems to get an exponential speedup in (highly parallel) neural nets, that would be fascinating.

Comment #49 April 21st, 2021 at 12:32 pm

domotorp #46: Alright, if you want to call being able to invert a trapdoor function “private-key supremacy,” I’m not going to argue. But it’s “supremacy” based on having explicitly generated and therefore not needing to compute some particular private key, rather than based on having physically engineered a new type of computer. You and I both know that this is semantics of the most fruitless and boring kind, akin to arguing that the Wright brothers weren’t the first to achieve powered flight because someone before them threw a motor, or attached a small cargo to a pigeon.

Comment #50 April 21st, 2021 at 12:43 pm

David #48: n qubits are a fundamentally new kind of resource—one that for almost all purposes, is at least as powerful as n classical bits and at most as powerful as ~2^n classical bits. As for where it falls between those two extremes—well, that’s the subject matter of this entire field! I think the trouble is just that you were looking for the sort of answer that would fit into a blog comment, when an honest answer, even just covering what we already know, is the size of a textbook or a semester-long course. (Exactly like if you’d asked the question, “what sorts of problems have efficient classical algorithms?”)

Yes, problems that admit large quantum speedups (especially exponential speedups) are extremely special in nature. Yes, those problems need to be parallelizable, but that’s only a necessary condition, not a sufficient one. For example, computing the parity of an n-bit string is extremely parallelizable, but it admits no asymptotic quantum speedup (only a speedup by a constant factor of 2).

If you want to learn more, my Scientific American article and undergrad lecture notes (feel free to skip around) are two possible places to start.
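One concrete way to see where the ~2^n figure comes from, without any claim of a blanket speedup: the naive classical simulation of a generic n-qubit state must store 2^n complex amplitudes. A back-of-envelope sketch:

```python
# Back-of-envelope: storing a generic n-qubit pure state as a dense
# vector of complex128 amplitudes takes 2**n * 16 bytes. This is a
# statement about one (naive) simulation method, not a claim that
# n qubits "equal" 2**n classical bits.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16  # 16 bytes per complex128 amplitude

assert state_vector_bytes(10) == 16_384       # 16 KB: trivial
assert state_vector_bytes(30) == 16 * 2**30   # ~17 GB: a big workstation
assert state_vector_bytes(53) == 16 * 2**53   # ~144 PB: Sycamore-sized
```

Clever simulation methods (tensor networks, stabilizer formalism) evade this cost for structured circuits, which is exactly why the question of where n qubits fall between the two extremes is a whole research field.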

Comment #51 April 21st, 2021 at 1:49 pm

Scott #43

“Aren’t you interested to find out who’s right?”

Do you think that overcoming that finite-size hump for scalable QC could end up requiring a collaborative effort like the LHC (9B$ budget), the ITER project for nuclear fusion (50B$), or the ISS (100B$)?

Or do you think that this will be achieved by private companies, as different instances of proprietary tech? (it’s usually the case for computing tech)

Comment #52 April 21st, 2021 at 2:54 pm

I am very glad that the comment of Ted #1 was highlighted. I think the comment illuminates an interesting part of the sociology of science, which is that a high-profile individual gets attention for making an observation previously made by others who are less well known (or even anonymous, as is the case in this situation). No-one is a villain here. Richard Borcherds may well have come up with his teapot observation independently, or had subconsciously absorbed the Stack Exchange conversation and forgotten about it in his conscious mind. And it is only human nature to give preferential attention to prominent members of the hierarchy. But the commonplace nature of the event somehow makes me feel sad.

Comment #53 April 21st, 2021 at 3:09 pm

LK2 #44:

Disagree on the logic.

I, for example, am much more skeptical about QC than QM. While QM certainly has some mysteries and conceivably will be superseded by a deeper theory, there is zero doubt that useful calculations can be done, and in fact are all the time. QC is another matter, and it is debatable if any useful calculation has been done.

Comment #54 April 21st, 2021 at 3:15 pm

I look forward to your joint paper with Noah about smashing the flower pots.

Comment #55 April 21st, 2021 at 3:22 pm

David #48, Scott #50: I will impertinently remark that of course we have a slightly stronger* upper bound of ~n classical bits and ~2^n time 😛

David #48: If you want a one-sentence summary: roughly speaking, for general ‘unstructured’ parallelisable problems you get a speedup of sqrt(T); for an exponential speedup you want your problem to have something to do with finding the period of a function (e.g. for factoring, the function x|->a^x mod N=pq has a period which divides (p-1)(q-1)). [of course this is all so far as we know]

*(maybe)
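The period-finding reduction mentioned above can be spelled out classically in a few lines. The order-finding step below is brute force, i.e. exponential in the bit-length of N; that single step is what the quantum algorithm speeds up.

```python
# Classical sketch of the factoring-from-period reduction. Finding the
# multiplicative order r of a mod N by iteration takes exponential time
# in the bit-length of N; the quantum speedup is confined to that step.
from math import gcd

def order(a: int, N: int) -> int:
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N = 15          # = 3 * 5
a = 7           # gcd(7, 15) == 1
r = order(a, N)                      # 7^4 = 2401 = 1 (mod 15), so r = 4
factor = gcd(pow(a, r // 2) - 1, N)  # gcd(7^2 - 1, 15) = gcd(48, 15) = 3
assert r == 4 and factor == 3
```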

Comment #56 April 21st, 2021 at 3:28 pm

I have a tenuous personal link with Richard Borcherds, being one of the people he beat in the 1977 British Mathematical Olympiad. I came third, and he came joint first. But in fact he really came first, because after waltzing through all six problems, he had time to improve on one of them. You had to prove that given a certain number of points inside a cube, there must exist points closer than a certain distance. It wasn’t the most difficult of the questions. But Richard proved, using his tiny-cubes, in his spare time so to speak, that this distance could be reduced from 13 to 10 (IIRC). After this came out in the post-mortem, I was in awe of him. Richard, if you’re reading this: you have a fan!

So I kind of followed his career, on a once-every-few-years basis, and was immensely gratified when he got his Fields Medal: hey, I can stand losing to that!

And I was delighted by your “less Britishly”. The video proves that he is as British as the day he was born! But your tale of failing to smash the toy teapots is what really made my day.

Comment #57 April 21st, 2021 at 3:36 pm

Scott #35: Right, though I guess the `skeptic’ position is that nature abhors quantum error-correction and so when you try it the almighty will smite you down, or line up the errors against you, or something, so maybe they would say there’s still a wall but it’s invisible for now…

Which makes me wonder: is there a way to make precise the property that a particular quantum circuit doesn’t do error correction? In particular might we dare to dream of a theorem that the behaviour (with constant gate fidelity) of any non-error-correcting circuit can be simulated in polynomial time?

Comment #58 April 21st, 2021 at 3:38 pm

TonyK #56: Great story!

Comment #59 April 21st, 2021 at 3:41 pm

asdf #54: Noah? Do you mean Daniel (my 4-year-old)? I’ll see if he wants to coauthor… 🙂

Comment #60 April 21st, 2021 at 3:49 pm

Des Smith #52: To complete the picture, I’ve had nearly-isomorphic conversations many times over the last decade, on this blog and elsewhere, preceding that Physics StackExchange thread as well (though I found the answers there unusually crisp). With the Borcherds video, though, what prompted me to write this post was not merely that Borcherds is justly distinguished, but also that

(1) he was exceedingly clear in the video—which made it easy to pinpoint my disagreement,

(2) the video kept showing up in my Facebook feed, and

(3) the video was entertaining.

Comment #61 April 21st, 2021 at 4:28 pm

There is a point that it is still not clear to me. One of the arguments against the teapot experiment is that it is not mathematically well defined while the (say) Sycamore experiment is.

However, that sweeps under the rug how hard it was to mathematically define the Sycamore experiment in the first place.

You may have had to individually measure and calibrate each gate to its point of operation in order to make your problem well defined.

Similarly, if you were building your teapot in an advanced fabrication facility, you could take a long time fabricating and measuring the individual “chunks” of the teapot down to the atomic level. That way you would have a well-defined “mathematical” model of the teapot to start with.

Comment #62 April 21st, 2021 at 5:04 pm

bertgoz #61: The way this point was explained on Physics StackExchange was so clear that I’ll just quote it:

The big difference between the quantum supremacy experiment and your pudding experiment is that the quantum supremacy experiment solved an unambiguous, well-posed mathematical problem. While people sometimes describe the computational task as “simulating the physical Sycamore computer”, that’s not right. The actual task was calculating the output of an abstract quantum logical circuit, of which the Sycamore computer was an approximate physical instantiation. The difference is subtle but crucial. From a computational perspective, the math came first and the physics came second.

Crucially, the quantum supremacy problem was mathematically well-specified, and so it could be checked on a classical computer. The parallel classical computation wasn’t just there to provide a time benchmark, but also – crucially – to check the quantum computation for accuracy.

There’s no such “slower but equivalent” computation to the pudding experiment…

Comment #63 April 21st, 2021 at 5:11 pm

DavidM #57:

Which makes me wonder: is there a way to make precise the property that a particular quantum circuit doesn’t do error correction? In particular might we dare to dream of a theorem that the behaviour (with constant gate fidelity) of any non-error-correcting circuit can be simulated in polynomial time?

Oh man, you have no idea how many times that exact question has come up. There are results that do parts of what you’re asking for—especially the 2018 work of Gao and Duan, which shows (roughly speaking) that a random noisy quantum circuit can be efficiently simulated classically in the asymptotic regime. (See also the related, much earlier work of Aharonov and Ben-Or.)

But no, I don’t know of any general way to formalize the concept of “not doing error-correction,” and I’m skeptical that that’s possible—since once you have a universal quantum computer with intermediate measurements, error-correction is simply one of the things you can do with it!

Comment #64 April 21st, 2021 at 5:12 pm

Scott #62: sorry, I might be quite obtuse, but I still don’t get it.

If I have built beforehand an atomistic model of the teapot, and I capture to a high enough degree of precision the conditions of the experiment (height, wind, etc.), what’s preventing me from then running the experiment on a supercomputer? On which I would have to painfully calculate the (quantum?) interactions of all the atoms as the teapot smashes against the floor.

Comment #65 April 21st, 2021 at 5:55 pm

bertgoz #64: Because you don’t, in fact, have such an atomistic model of a teapot. And even if you did, an actual teapot wouldn’t conform to the model, because of machining errors and so forth, and therefore it wouldn’t help you in deterministically predicting the chaotic behavior of the atomistic model. At most, the actual teapot could tell you in general what teapots like that one (but not atomistically identical) tend to do when smashed. But a simulation running on a classical computer could’ve told you the same! And that’s why you almost certainly don’t have teapot supremacy.

By contrast, there’s an ideal mathematical model of a 53-qubit, ~1000-gate quantum circuit, and Google’s Sycamore chip managed to produce samples consistent with that model (though if you changed even a single one of the 1000 gates, you no longer had consistency). This is no longer a hypothesis or a supposition but a demonstrated fact. And as far as we know, spoofing the samples with a classical computer would take much longer and/or require a large number of cores.

In both cases, the teapot and the Sycamore chip, you don’t start with a physical system and then challenge people to simulate it. Instead, you start with a mathematically well-defined problem and then challenge various physical systems—classical computers, quantum computers, teapots—to solve the problem. A failure is not a failure of the mathematical problem to capture all aspects of the physical system, but the exact opposite: a failure of the physical system to capture the problem.

I’ve explained the above point over and over and over and over for the past decade, but it hardly ever seems to get through. Now might be a good time for someone else to take a stab at it!
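The “estimate the whole distribution” argument from the original post can be sketched in a few lines: when the outcome space is small and discrete (shard counts, say, bounded by 1000), a classical computer that has seen enough physical trials can sample from the empirical distribution from then on. The shard counts below are made up purely for illustration.

```python
# Minimal sketch of the "teapot parity" argument: with a small discrete
# outcome space, the empirical distribution from repeated smashes lets
# a classical computer sample just as well. Data below is hypothetical.
import random
from collections import Counter

smash_results = [7, 9, 7, 12, 9, 7, 10, 9]   # made-up shard counts

counts = Counter(smash_results)
outcomes = sorted(counts)
weights = [counts[k] / len(smash_results) for k in outcomes]

# The classical computer now samples from the same (estimated) distribution:
samples = random.choices(outcomes, weights=weights, k=5)
assert abs(sum(weights) - 1.0) < 1e-9
assert all(s in outcomes for s in samples)
```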

Comment #66 April 22nd, 2021 at 4:28 am

Hi Scott, you beat me in time on this 🙂 I had written a rebuttal of the teapot experiment for our internal discussion group at work but you did it first: http://www.gagliardoni.net/#quantum_teapot_apr_2021

Comment #67 April 22nd, 2021 at 5:12 am

Scott whoops! Yes, I meant Daniel. I don’t know why I remembered his name as Noah. I thought you had written that in your post but I must have gotten confused. Sorry!

TonyK #56 “The video proves that he [R. Borcherds] is as British as the day he was born!”: According to Wikipedia, he was born in South Africa…. 😉

Comment #68 April 22nd, 2021 at 5:26 am

Smashing teapots on your driveway seems like quite a Texan way to do science. You come across as a calm person so the image this paints is pretty funny 😀

Comment #69 April 22nd, 2021 at 5:51 am

“If so, though, then clearly a classical computer can easily sample from the same distribution! Why? Because presumably we agree that there’s a negligible probability of more than (say) 1000 shards. ”

I think even if the number of shards goes to infinity, it should be possible to derive a good approximation of a continuous probability distribution which, e.g., describes the size distribution of the shards. And similarly for shape. Of course that might be a lot of work and difficult to do.

Also: to compare BosonSampling to something else, the ‘something else’ should not be chaotic, meaning it should not depend exponentially on the initial conditions, which could be the case for the teapot example. Then of course we can probably not find a good probability distribution for the shards, but then it is also not a well-defined problem and quite different from BosonSampling.

That said: I do like the teapot test as a way to judge quantum computing proposals.

Comment #70 April 22nd, 2021 at 6:42 am

The teapot supremacy argument falls apart as soon as you ask yourself what you actually use a computer for. Of course you can view any system in nature as simulating itself, but if we can’t control it then it’s useless to us. Try doing something useful with a quantum system and you’ll figure out the difference right quick. A lattice of coupled spins, isn’t that a quantum computer? Each spin is a qubit and the Hamiltonian governing their interaction encodes some particular problem. Well if you can control the Hamiltonian then yes congrats, it actually is a quantum computer. A piece of iron in your pocket isn’t though, even if it’s hard to simulate.

Comment #71 April 22nd, 2021 at 7:19 am

“as long as your analog computer is governed by classical physics, it’s presumably inherently limited to an Avogadro’s number type speedup over a standard digital computer”

How come?

Comment #72 April 22nd, 2021 at 8:29 am

Ingrid #71: Because, as long as we’ve assumed many-body quantum effects aren’t important, a simulation on a digital computer could always just go atom by atom or molecule by molecule.

Comment #73 April 22nd, 2021 at 7:01 pm

Computational problem: drop a ball bearing into a 1 gallon bucket filled with n millilitres of water. Count the number of ripples that emanate after m seconds, starting from the moment the bearing hits the water.

Input: m and n.

I claim this problem is “mathematically well posed,” intractable on a classical computer (given the hardness of simulating fluids), but easily done on a “hybrid device” consisting of a classical computer, a real bucket, and a digital camera.

Scott, what is your objection?

Comment #74 April 22nd, 2021 at 7:36 pm

ASimilarQuestion #73: The response is exactly the same as the response to Borcherds’s example: you have to look at the whole probability distribution over possible ripples (averaging over the microscopic perturbations), and when you do, it’s unlikely that you’ll still see supremacy over classical computers for the sampling task. Like, did absolutely nothing get across here? Are people really just going to keep repeating the original thing I was responding to, the thing I directly answered in this very post, as if the post didn’t exist?

Comment #75 April 22nd, 2021 at 8:48 pm

Scott #74: In a stunning new result Scott has discovered low-complexity patterns in misunderstandings of quantum supremacy, suggesting that “bozo sampling” might be polynomial-time after all.

Comment #76 April 22nd, 2021 at 8:58 pm

Scott #74: why are you changing the problem statement? There’s no probability. It’s a counting problem, not a sampling problem. Also, why do you think this is easy? The complexity of simulating ODEs, let alone PDEs, to specified accuracy grows exponentially in the length of the simulation, which is one of the inputs of the problem statement.

I think you keep getting this type of question because it hasn’t been answered clearly enough.

Comment #77 April 22nd, 2021 at 9:25 pm

ASimilarQuestion #76: It’s not “changing the problem statement,” because there wasn’t a clear problem statement to begin with. For the nth time, you can’t just point at a physical system and say, “simulate whatever that does.” That’s not a “problem statement” in any sense comprehensible to theoretical computer science, because you haven’t specified the input as data. It’s simply not a fair fight: a classical computer presumably won’t be able to simulate the exact pattern of ripples, but not for reasons of computational difficulty, only because it doesn’t know the detailed microscopic state of your system—a different, orthogonal issue. Indeed, if you tried to specify the state of your system down to the last molecule, then long before you were done, the system would’ve been chaotically buffeted to a different state, meaning that the system itself would no longer faithfully implement the mathematical description that you’d written down for it!

Meanwhile, if you give the classical computer only crude, macroscopic data about the system, then the most you can reasonably ask the computer to do, with the data it has, is to sample from a probability distribution over possible evolutions—but that the classical computer most likely can do, and very efficiently as well, provided the dynamics of the system are classical.

None of this is similar to what happens in sampling-based quantum supremacy experiments, where the classical computer is given a complete description of the relevant degrees of freedom, good enough to simulate the quantum system in exponential time, and the only reason why it can’t simulate it in polynomial time is the exponential dimensionality of the Hilbert space.

All this, again, is right there in the original post!

Comment #78 April 22nd, 2021 at 10:06 pm

Scott #77: In your future back and forth with others on this issue, why not just demand that the inputs and outputs to their favorite experiment/device be as categorically stated as those to the integer-factoring problem/algorithm?

The factoring problem and (at least one classical) algorithm is way more familiar than the problem that Sycamore dealt with.

If you’re then asked whether Sycamore’s problem/output was categorically stated, then you can state your case for why it was.

Comment #79 April 23rd, 2021 at 5:08 am

@Scott OP, #65: I think to make the explanation clearer, your statement about the distribution of shards needs to be strengthened: under the classical mechanics that (presumably) governs teapot smashing, all fluctuations in the number of shards are due to noise of various sorts, whether those are gusts of wind or fluctuations in the construction of your assembly of teapots. There’s no reason to conceptually separate one from the other: both the wind conditions at the experimental site and the make of the teapot are initial conditions that must be perfectly controlled for the problem to make sense as a problem.

I know that you know this (and it’s in the original post), but I’m just highlighting it because it might help ASimilarQuestion, since these ‘objections’ mostly work by sneaking in classical uncertainty in the initial conditions.

@Itai #75: you win the internet for today for ‘bozo sampling’.

Comment #80 April 23rd, 2021 at 5:32 am

Perhaps another way of phrasing it is that dropping a teapot is not a single algorithm for computing how many pieces the teapot will break into when dropped. Instead, it is a series of constantly altering algorithms, which change with every wobble of the pot in your hand, with every wind current that blows on it, with every thermal jiggle of the component molecules. In a sense, then, when you drop it, you are yourself sampling from the probability distribution of all possible states of the pot, and thus all possible algorithms.

Comment #81 April 23rd, 2021 at 6:44 am

ASimilarQuestion: One way to see the difference is that the Google paper contained extensive analysis showing that their circuit actually solves the computational task they defined. That already shows it wasn’t a tautology of the type “simulate yourself.”

You don’t really define a replicable experiment, since every bucket at every time will be a different experiment. This is similar to me declaring “Boaz supremacy,” since I can compute the function that, on input a time t, outputs the sentence I will utter at that time, and a classical computer can’t predict that. In fact, a classical computer by itself with a hardwired pseudorandom function can achieve “supremacy” if that’s the standard.

Comment #82 April 23rd, 2021 at 6:53 am

@Asimilarquestion: the crucial difference between ball bearing advantage and quantum advantage is that you can only make ball bearing advantage work if you put the classical computer severely on the back foot by introducing an information asymmetry. You do this by posing a problem whose answer depends on something the ball bearing system ‘knows’ and that the classical computer doesn’t (namely the exact internal configuration of the bearing + water system).

You can remove that asymmetry in two ways: either by giving the classical computer the exact same full information about the experimental system, or by modifying the problem to be about the total possible distribution of configurations, so that the problem no longer depends on the precise internal configuration of the experimental system at all. In both of these cases, it is generally believed that the classical computer would have no problem keeping up with the experimental system in solving those problems.

In contrast, quantum advantage experiments (uniquely, so far) are physical systems in which there is symmetry in what the classical system ‘knows’ and what the quantum system ‘knows’, but where quantum still beats classical.

Comment #83 April 23rd, 2021 at 7:01 am

Here is another example to show that this is not really about “simulating yourself” and not even about programmability but about full specification and replication:

I can build a device that has a private signature key SK (corresponding to a verification key VK) hardwired into it with some tamper resistance, and that computes the function that, on input a message x, outputs the signature of x with respect to SK. Another way to say it is that it computes the function

F_VK(x) = sig where sig is (let’s assume unique) signature for x under VK

This device, which is a classical computer, is “programmable” in the sense of computing a function on exponentially many inputs, and other people can’t compute this function in poly time. However, this does not show that this classical device is computationally superior to other classical computers, since the difficulty here is not about simulating the device but about knowing its state or cloning another copy of it. In contrast, Sycamore has a spec, and it is possible in principle to build another Sycamore. While it won’t be identical, it should achieve similar fidelity with the ideal mathematical noiseless quantum circuit it simulates.

Comment #84 April 23rd, 2021 at 8:31 am

@Scott #77: Fine. To remove the hidden information, replace “water” in my problem statement with an ideal fluid with fixed initial conditions. The counting problem can now be perfectly specified to a classical computer. Further, known algorithms for its solution will still scale exponentially in the length of the simulation m, given the hardness of solving the Navier-Stokes equations to a prescribed accuracy.

I suppose the objection now is that it’s impossible to construct ideal fluids in the lab, so we could never demonstrate “bucket supremacy” on this counting problem. I’m not really sure I agree with that. Further, we could pose similar questions using other idealized dynamical systems that are easier to build but extremely hard to simulate (like a double pendulum). If the correct answer depends on the accuracy of the simulation, then classical algorithms—i.e., numerical integration methods—will still scale exponentially.
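The exponential sensitivity being appealed to here is easy to exhibit in miniature: in a chaotic map, two initial conditions differing by 10^-12 diverge to macroscopic separation within a few dozen steps, which is why “fixed initial conditions” for a physical chaotic system demand precision no lab can supply. A minimal sketch with the logistic map (standing in, illustratively, for a double pendulum):

```python
# Chaos in miniature: the logistic map at r = 4 has a positive Lyapunov
# exponent, so a 1e-12 difference in initial conditions is amplified
# roughly 2x per step and soon becomes macroscopic.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12
max_sep = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

assert max_sep > 0.1   # the two trajectories have fully decorrelated
```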

Comment #85 April 23rd, 2021 at 8:38 am

If the teapot problem can be described as a math problem with some very large but finite number of parameters, why is the resulting computation always polynomial, i.e., why doesn’t it ever become exponential? Scott talked about Avogadro’s number vs. Hilbert space; why doesn’t Hilbert space enter into the teapot math problem, as long as the number of inputs is finite?

Comment #86 April 23rd, 2021 at 9:02 am

Forget teapots, let’s just model a coin toss done by a human.

There are different problems that can be solved.

We know that a real life “perfect” coin has 50/50 probability. Perfect in the sense that the coin is unbiased and that there’s enough chaos in the system (which is the case when a human is tossing the coin).

We’re never talking about reproducing a given coin toss experiment instance (because no one can throw the same coin in exactly the same way either) but capturing the asymptotic behavior of the system, when the coin is tossed over and over.

We don’t need to model the coin down to the atomic level, plus all the molecules in the room, plus all the molecules in the human tosser (including his/her entire brain), … in order to capture the fact that a coin toss gives a 50/50 probability. Modeling all that stuff would buy us nothing. So modeling such a coin toss on a computer is trivial, you just need to generate a random number between 0.0 and 1.0 and if it’s smaller than 0.5, it’s head, and it’s tail otherwise.

But this is all different from saying “here’s a real life coin, which we know is biased in some way (say, head comes up only 40% of the time, but you’re not given that actual distribution)… I give you this coin, and you’re allowed to measure it anyway you want (imagine that the coin’s density is non-uniform), and then you have to model the tossing of this coin such that your simulation will give the same biased distribution as the real coin”
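The two cases in this comment can be put side by side in a few lines: the fair coin needs no physics at all, and the biased coin needs only its measured bias (the 0.4 below plays the role of the empirically measured value), not an atomistic model.

```python
# The commenter's two cases: a fair toss needs no physical modeling,
# and a biased coin needs only its empirically measured bias.
import random

def toss(p_heads: float = 0.5) -> str:
    return "heads" if random.random() < p_heads else "tails"

random.seed(0)       # for reproducibility
n = 100_000
fair = sum(toss() == "heads" for _ in range(n)) / n
biased = sum(toss(0.4) == "heads" for _ in range(n)) / n   # 0.4: measured bias

assert abs(fair - 0.5) < 0.01
assert abs(biased - 0.4) < 0.01
```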

Comment #87 April 23rd, 2021 at 9:54 am

How about if you change the teapot problem so that the output space is exponentially large. For example, outputting the sizes of the shards (in terms of their mass)? Then, if the teapots have the property that the number of shards is usually between 100 and 200, the space that is being sampled over becomes too large to employ your method of estimating a probability vector. Could you smash a warehouse of teapots to obtain data from which you could sample with a computer?

Comment #88 April 23rd, 2021 at 10:00 am

A non-expert’s bozo index (a number between 1 and infinity) might be defined as the number of repeated, varied and always erudite explanations from our patient host that were needed before the point finally sank in. My bozo index is large, but finite (allowing for occasional relapses). It was helpful to discuss a problem I knew a little bit about, protein folding.

Comment #89 April 23rd, 2021 at 10:41 am

Dropping a teapot is probably not very chaotic; at least, dropping a plastic water bottle a few times seems to produce a rather stable trajectory. So I would estimate that all you have to do is keep track of a hundred or so oscillation modes for the shattering of a teapot, and that seems to be doable in the t = sqrt(2s/g) ~ 1/2 s it takes the teapot to actually drop. So it appears that a teapot is, as a matter of fact, not a good teapot simulator.

Comment #90 April 23rd, 2021 at 11:22 am

Use Factoring #78:

In your future back and forth with others on this issue, why not just demand that the inputs and outputs to their favorite experiment/device be as categorically stated as those to the integer-factoring problem/algorithm?

Just for the obvious reason that we don’t yet have quantum computers that achieve supremacy (or anything close to it) for factoring, or other problems in NP ∩ BQP. Once that exists, we’ll presumably no longer need this discussion. But the existing quantum supremacy experiments really do require a careful conceptual discussion of what we mean by a “problem,” and by “solving” the problem, in theoretical CS—a discussion I’m happy to have, but a discussion nonetheless!

Comment #91 April 23rd, 2021 at 11:29 am

Richard Cleve #87:

How about if you change the teapot problem so that the output space is exponentially large. For example, outputting the sizes of the shards (in terms of their mass)?

Good question! I thought about that as well. In that case, one would no longer have the knock-down argument that the whole distribution can be estimated by just explicitly estimating all the probabilities. But one could, of course, fall back on the more general argument from this post: that

(1) if you define the problem as “output the sizes of all the shards, given this atom-by-atom mathematical model of a teapot,” then not even the physical teapot itself will be able to solve it, due to microscopic differences, whereas

(2) if you instead define it as “sample from the probability distribution over lists of shard sizes, given this coarse-grained description of a teapot,” then a classical computer can efficiently solve the problem, to whatever extent you believe the Extended Church-Turing Thesis.

Comment #92 April 23rd, 2021 at 11:32 am

venky #85:

If the teapot problem can be described as a math problem with some very large but finite number of parameters, why is the resulting computation always polynomial, ie why doesn’t it ever become exponential? Scott talked about Avogadro number vs Hilbert space, why doesn’t Hilbert space enter into the teapot math problem as long as the number of inputs are finite.

Once again, because of the Extended Church-Turing Thesis. To whatever extent you can model the atoms classically, a classical computer could simulate the teapot by just keeping track of the position, momentum, and so forth of every last atom. And to whatever extent you need to model the atoms quantum-mechanically … well, to that extent the teapot literally is a quantum computer, albeit an extremely special-purpose one! 🙂

Comment #93 April 23rd, 2021 at 11:44 am

ASimilarQuestion #84:

I suppose the objection now is that it’s impossible to construct ideal fluids in the lab, so we could never demonstrate “bucket supremacy” on this counting problem.

Yes, that’s precisely the objection. And it’s a completely devastating objection, not just for fluids but for any analog system where you depend for your computational power on the exponential precision of observable parameters like position, momentum, or energy. If it weren’t a devastating objection, then it would actually work to build classical analog computers that violated the Extended Church-Turing Thesis, probably even solved NP-complete or PSPACE-complete problems in polynomial time, by encoding exponentially many bits into continuous parameters. But it’s not just that we have 60+ years of experience that it doesn’t work (see, e.g., my NP-complete Problems and Physical Reality survey or my memrefuting post). The deeper problem is that we already know that continuous idealizations of nature break down if you push them as hard as would be needed to get computational speedups. So for example, there are no ideal fluids satisfying the Navier-Stokes equations, because real fluids are made of atoms. And the Planck scale (i.e., your analog computer collapsing to a black hole), if not stuff long before that, places a fundamental limit on the precision with which you could ever hope to resolve distances, times, energies, and momenta.

Comment #94 April 23rd, 2021 at 1:07 pm

Scott #90:

“But the existing quantum supremacy experiments really do require a careful conceptual discussion of what we mean by a “problem,” and by “solving” the problem, in theoretical CS—a discussion I’m happy to have, but a discussion nonetheless!”

Correct. And, as I had alluded, that’s an unavoidable discussion.

But people appear to be divided into two sets:

1) Those who realize that the above CS discussion is necessary in order to understand Sycamore’s results from 2019.

2) Those who don’t.

Some people in the second set (somewhat annoyingly) persist with glib, teapot-type arguments. I’m pointing out that it should be easy to bring them into the first set by using Factoring as a bridge. There appears to be no other problem in BQP that combines (a) layperson familiarity and (b) strong belief in being outside P. One needs (a) for the bridge to work.

Comment #95 April 23rd, 2021 at 1:37 pm

Use Factoring #94: I mean, Borcherds says explicitly in his video that he doesn’t have the same problem with Shor’s algorithm! It seems to me that talking about Shor’s algorithm is useful for people who believe that quantum supremacy could never be demonstrated even in principle, less so for people whose problem is specific to the recent sampling-based experiments.

Comment #96 April 23rd, 2021 at 2:20 pm

I’m not sure what “quantum/classical hybrid algorithm” means. In a sense, could this imply that ‘classical’ is meant to include both digital and analog? I always thought quantum computing implies the ‘analog’ state of the quantum system. Is it then possible to use both digital and analog computation in conjunction with quantum computation? Like a hybrid classical analog/digital computer that talks with a quantum computer, but the classical analog and quantum analog share physical logic (‘combined’), while the classical digital and quantum/classical algorithms cooperate?

I’m hoping that classical analog computers could violate the Extended Church-Turing Thesis if they are ‘combined’ with quantum computing, so as to quantize continuous idealizations of nature. If math makes exact the inexact aspects of nature (the idealization of the continuous), couldn’t a hybrid algorithm allow a dialogue between quantum and analog computing?
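[Editor's note: whether analog continuity can smuggle in exponential computational power is exactly what the Extended Church-Turing Thesis denies; a toy sketch of the precision obstacle, using an invented bit-packing scheme:]

```python
import random

def encode(bits):
    """Pack a bit string into one real number in [0, 1)."""
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

def decode(x, n):
    """Read n bits back out by repeated doubling."""
    out = []
    for _ in range(n):
        x *= 2
        bit = int(x)
        out.append(bit)
        x -= bit
    return out

random.seed(1)
bits = [random.randint(0, 1) for _ in range(50)]
x = encode(bits)

clean = decode(x, 50)         # noiseless readout recovers every bit
noisy = decode(x + 1e-9, 50)  # a one-part-per-billion perturbation
# already scrambles the bits past roughly the 30th, since bit k lives
# at scale 2^-k: the "exponential" analog storage exists only at
# exponential precision.
```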

Comment #97 April 23rd, 2021 at 2:29 pm

Marxist Stonethrower #96:

Im not sure what quantum/classical hybrid algorithm means.

What people mean by it in practice is an algorithm that’s “mostly” run on classical computers, but that has one key subroutine that’s done using a QC. However, as I’ve often pointed out, this “hybrid” property is a rather silly thing to emphasize, because for the foreseeable future every quantum algorithm will be like that.

Comment #98 April 23rd, 2021 at 2:30 pm

A programmable microdroplet array is just many tiny teapots in a tray. Massive cost-effective scaling might give quantum computers some competition.

This just came out:

A molecular computing approach to solving optimization problems via programmable microdroplet arrays April 07, 2021

Comment #99 April 23rd, 2021 at 2:42 pm

Nick Nolan #98: Extended Church-Turing Thesis + years of failures to get clear speedups using special-purpose annealing hardware (I’m about to give a talk, so fleshing out this comment left as an exercise for the reader 🙂 )

Comment #100 April 23rd, 2021 at 2:47 pm

Scott #95: I didn’t find Borcherds to be clear in his objections but I noted three things in his video:

(1) No problem with Shor.

(2) Quibble with Sycamore.

(3) A teapot experiment.

I don’t see how he could have thought that all three problems have well defined inputs and outputs and yet equate the complexity classes of (2) and (3) while placing (1)’s higher than them. He definitely did the equating and implied the placing. So he must have got the definitions wrong.

OR

He thought that the teapot is the best machine to simulate itself (which is correct) and so Sycamore is trivial – which would mean that he either got the definition of “computer” wrong or that he’s unwilling to admit that Sycamore is a computer. The latter would be incredible since he would be implying that Sycamore can only achieve the results that it did for exactly one circuit.

Comment #101 April 23rd, 2021 at 4:34 pm

Boaz Barak #81

” One way to realize the difference is that the Google paper contained extensive analysis showing that their circuit actually solves the computational task they defined. ”

The Google paper displays (in the supplementary material) extensive analysis of what their circuit achieves, actually for different circuit variants (that are more or less difficult to simulate classically).

However, looking at the paper in detail, it seems that no experimental cross-entropy benchmarking (XEB) data is given for the “non-simplifiable full” circuit variant that corresponds to the difficult problem for which computational supremacy is claimed.

XEB is expected to be computationally intractable for such a circuit (at depth 20) with a large enough number of qubits.

On the other hand, XEB should be feasible, even with this circuit, in the 20-40 qubit range.

If this observation is correct, then there is some gap, larger than what could be expected, between the experimental tests conducted and the mathematical problem that has been defined.
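[Editor's note: for readers following this point, the linear cross-entropy benchmark itself is simple to state; a minimal sketch with an invented 3-qubit distribution. The classically expensive step at 53 qubits is computing the ideal probabilities, which is trivial here:]

```python
def linear_xeb(sampled_ideal_probs, n_qubits):
    """Linear cross-entropy benchmark:
    F = 2^n * mean(p_ideal(x_i)) - 1 over the observed samples x_i.
    F ~ 1 for a perfect sampler, F ~ 0 for uniform random noise."""
    mean_p = sum(sampled_ideal_probs) / len(sampled_ideal_probs)
    return 2 ** n_qubits * mean_p - 1

n = 3
# Hypothetical ideal output distribution of some 3-qubit circuit:
probs = [0.30, 0.25, 0.15, 0.10, 0.08, 0.06, 0.04, 0.02]

# A uniformly random sampler sees each bitstring equally often, so the
# ideal probabilities of its samples average to 1/2^n and F ~ 0:
f_uniform = linear_xeb(probs, n)

# A faithful sampler over-reports the heavy bitstrings; in the limit of
# many samples its mean ideal probability is sum(p^2), giving F > 0:
f_faithful = 2 ** n * sum(p * p for p in probs) - 1
```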

Comment #102 April 23rd, 2021 at 4:36 pm

There is a subtle case to be made for Borcherds’s teapot-supremacy comment.

In that the Sycamore experiment started out by repeatedly smashing & modelling teapot chunks (the calibration stage) before assembling the chunks into a teapot and then smashing the whole thing.

And there weren’t that many chunks in the teapot.

AFAIK it doesn’t invalidate the Sycamore supremacy claim but it needlessly blurs the picture, and it totally misrepresents the device’s computational power.

Why? Because in order to run any but the most contrived tasks on the device (with the gate unitaries that were used in the experiment) you’d have to cancel out all of its quirks or spend most of the circuit moving qubits around (and the max circuit depth is already really low).

Or just have really low fidelity, maybe to such an extent where a classical computer would be competitive?

IMO it’s only the experimental details that lead to the teapot analogy.

Comment #103 April 23rd, 2021 at 5:40 pm

It breaks my heart

To see Scott-ee

Smashing a perfect little pot of tea

With apologies to John Hiatt

https://youtu.be/KnxNe1pFpo0

Comment #104 April 23rd, 2021 at 6:47 pm

Your inability to accept that D-Wave has built something very interesting is a huge blind spot, and it is very peculiar that someone who is in your position continues to just stopper your ears and pretend like they don’t exist, even though they are lightyears ahead of everyone else *in the field in which you are working*. It would be valuable for everyone for you to do a proper analysis of some of their recent results in modeling quantum magnets.

Comment #105 April 23rd, 2021 at 8:41 pm

ASimilarQuestion #84:

> Further, we could pose similar questions using other idealized dynamical systems that are easier to build, but extremely hard to simulate (like a double pendulum). If the correct answer depends on the accuracy of the simulation, then classical algorithms—i.e., numerical integration methods—will still scale exponentially.

With respect, you seem smart enough to see the objection already. The double pendulum is chaotic, so a physical double pendulum cannot compute anything useful about a mathematical model of itself.

Suppose you build a physical double pendulum and very carefully measure its physical parameters (length and mass of the arms and an initial position to release them from) to use as parameters in the idealized ODE model of a double pendulum. Will that physical double pendulum then compute how the ODE model of itself behaves with those parameters over a longer period of time than numerical integration of the ODE model? No, not even probabilistically in any useful way (i.e., not smearing out over the entire range of the pendulum), because it’s chaotic.
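[Editor's note: a minimal sketch of this sensitivity, using the standard equal-mass double-pendulum equations of motion and a plain RK4 integrator; the release angle and nanoradian perturbation are illustrative choices:]

```python
import math

G = 9.81          # gravity, m/s^2
L1 = L2 = 1.0     # arm lengths, m
M1 = M2 = 1.0     # bob masses, kg

def deriv(s):
    """Equations of motion; s = (theta1, omega1, theta2, omega2)."""
    t1, w1, t2, w2 = s
    d = t1 - t2
    den = 2 * M1 + M2 - M2 * math.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * math.sin(t1)
          - M2 * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * M2
          * (w2 * w2 * L2 + w1 * w1 * L1 * math.cos(d))) / (L1 * den)
    a2 = (2 * math.sin(d)
          * (w1 * w1 * L1 * (M1 + M2) + G * (M1 + M2) * math.cos(t1)
             + w2 * w2 * L2 * M2 * math.cos(d))) / (L2 * den)
    return (w1, a1, w2, a2)

def rk4(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(s)
    k2 = deriv(tuple(x + 0.5 * dt * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + 0.5 * dt * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6 * (p + 2 * q + 2 * r + u)
                 for x, p, q, r, u in zip(s, k1, k2, k3, k4))

def final_state(theta0, t_end=30.0, dt=0.001):
    """Release both arms at angle theta0, at rest, and integrate."""
    s = (theta0, 0.0, theta0, 0.0)
    for _ in range(int(t_end / dt)):
        s = rk4(s, dt)
    return s

# Two releases differing by one nanoradian end up in wildly
# different states: no measurement of the physical pendulum is precise
# enough to make it "compute" the idealized model of itself.
a = final_state(2.0)
b = final_state(2.0 + 1e-9)
separation = max(abs(x - y) for x, y in zip(a, b))
```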

With Sycamore, I gather the unitary matrices might have been measured or calibrated from the physical device, but then Sycamore computes “idealized random circuit sampling using these specific unitaries” faster than a classical computer computes it. (At least, that is the supremacy claim.)

You will not find a structurally analogous result for any chaotic physical system, because physical chaos is useless for computation of idealized models. The quantum supremacy claim is about computing the result of an idealized mathematical model that is hard to compute, not about computing the result of a physical system that is hard to predict.

Comment #106 April 23rd, 2021 at 10:59 pm

By the way, in case anyone missed it earlier this month, there is a new Wolfram Physics Turing test up:

https://writings.stephenwolfram.com/2021/04/the-wolfram-physics-project-a-one-year-update/

The Turing test is to figure out whether it’s an actual article with content, or output of GPT-3 seeded with physics jargon. I confess to being stumped. 😉

Comment #107 April 24th, 2021 at 8:27 am

Michele Sim #104: It’s not as though I’m a lone voice in the wilderness on D-Wave anymore! The entire field seems to have lost interest 4-5 years ago, as devices started becoming available with fewer qubits but 10,000x better coherence times, capable of much more than just annealing. I actually think D-Wave is doing some interesting things these days — their recent papers on solving physics problems with their device looked cool, whether or not they’re beating the best that can be done with a classical computer (which we won’t know until the rest of the community spends significant effort on it). Unfortunately for D-Wave, they have a long way to go in regaining the credibility they lost from their early days of one wild/irresponsible claim after another, and indeed, the wild/irresponsible claims they’ve continued to make about e.g. helping Volkswagen solve traffic optimization problems.

Comment #108 April 24th, 2021 at 10:31 am

Scott #107: D-Wave’s claim of helping Volkswagen with traffic optimization caused me to realize that the notion of a “commercial application of quantum computing” is actually a bit more subtle than it seems. I actually think it’s not inconceivable that (a) Volkswagen really could be using the D-Wave computer in its traffic optimization modeling, and (b) they have a sound business reason for doing so. It’s just that the sound business reason has absolutely nothing to do with D-Wave’s actual computational capabilities!

Specifically, I could imagine that Volkswagen occasionally uses D-Wave for some totally trivial purpose, e.g. effectively serving as a random number generator subroutine in some stochastic classical optimization algorithm, or solving a tiny optimization problem that’s easy to solve classically. And their real motivation for doing so is actually just for advertising or PR purposes (e.g. persuading poorly informed customers/shareholders/whomever that they’re at the cutting edge of tech). And due to the odd incentives set up by a system that encourages maximizing shareholder value, there might be some sense in which this is rational behavior. One could argue that this technically does indeed count as a “commercial application of quantum computing” – albeit not one that a computer scientist would find very satisfying – in the sense that it could be indirectly boosting the company’s revenue, just as their advertising and PR departments do. (Of course, this revenue boost won’t be sustainable in the long run, unless quantum computers really do eventually deliver new capabilities.)
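[Editor's note: to make the "RNG subroutine" scenario concrete, here is a hypothetical sketch — the toy cost function and all names are invented. It is textbook simulated annealing in which the randomness source is the only pluggable piece, i.e., the only place a quantum device could be "involved" without changing the algorithm at all:]

```python
import math
import random

def anneal(cost, x0, rand, steps=5000, t0=2.0, step_size=0.5):
    """Plain simulated annealing on the real line.  `rand` yields
    uniform floats in [0, 1); swapping in hardware-generated or
    'quantum' random numbers changes nothing about the algorithm."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-9  # linear cooling schedule
        y = x + step_size * (2 * rand() - 1)
        fy = cost(y)
        # Metropolis rule: always accept downhill, sometimes uphill.
        if fy < fx or rand() < math.exp((fx - fy) / temp):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy double-well cost, invented for illustration:
cost = lambda x: (x * x - 1) ** 2 + 0.3 * x

best, fbest = anneal(cost, x0=3.0, rand=random.Random(42).random)
```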

All that I’m getting at is that the presence of market irrationality around quantum computing might induce “serious” companies to not only invest in quantum computing, but to actually use quantum computers in their business operations, before quantum computers actually deliver any direct practical advantage over classical computers. So commercial utilization won’t necessarily be a good signpost that quantum computers have finally demonstrated “useful advantage” in any sense of the word “useful” that a computer scientist would find satisfying.

Comment #109 April 24th, 2021 at 1:50 pm

Ted #108: The trouble is, though, you could also “derive a PR benefit” from using astrology or homeopathy, if those were fields that enough of your customers or investors cared about. I.e., the bar of deriving a PR benefit is so low that anything can pass it under the right circumstances, regardless of its intrinsic qualities. If you like, the goal of calling bullshit is to affect the world, by making it harder to derive PR benefits from stuff that doesn’t intrinsically work!

Comment #110 April 24th, 2021 at 3:06 pm

Scott #109: Yes, I 100% agree, and I’m certainly not defending trivial uses of NISQ quantum computers as actually being socially beneficial on net. I’m just saying that one might naively suppose that once “serious” companies are actually using NISQ computers in business operations, that will finally resolve once and for all the question of whether QCs can be “more useful” than classical computers. But unfortunately, even this development won’t actually resolve that question, due to the perverse incentives that QC hype has created.

In fact, if I had to guess, I’d say that in a few years, major companies will be regularly paying quantum computing startups to rent time on their NISQ computers, and will be loudly advertising that they’re doing so. (I don’t think that very many established companies have actually been paying out money externally so far, although I’m not sure about that. They presumably won’t reveal the actual values of these contracts, but if they do, then it will be enough money to sound impressive to a layperson while still being a completely negligible fraction of their operating budgets.) And yet actual computer scientists will still be debating whether there’s any evidence of a true practical quantum advantage. And the general public will be even more confused than they are today.

Comment #111 April 24th, 2021 at 8:26 pm

Ironically, unless scalability is demonstrated, it may turn out that one of the few useful things a quantum computer can do is compute the result of physical systems that are hard to predict, eg small molecular systems, with a few hundred degrees of freedom.

Comment #112 April 25th, 2021 at 12:40 am

Michele Sim #104,

Is it clear that D-Wave products actually do anything at all? I recall the annealing calculation a few years back where a DW + a PC turned out to be less powerful than a PC.

Comment #113 April 25th, 2021 at 8:57 am

Ted #110: It seems improbable to me that serious companies will use quantum computers in day-to-day business operations without applications that are clearly in the quantum-advantage regime. Quantum computers are too expensive to be used in any other way.

Comment #114 April 25th, 2021 at 9:09 am

Thank you for your patience, Scott. I think I understand your arguments and mostly agree.

What is still a little surprising to me is your faith in the Extended Church-Turing Thesis: why not believe only in the regular Church-Turing Thesis? I.e., assume that the concept of quantum computers just “smuggles in the exponentiality” in an unphysical way?

Comment #115 April 25th, 2021 at 10:20 am

bertgoz #114: I’m not sure I understand your question. I do believe the original Church-Turing Thesis, which is the version about computability. The Extended Church-Turing Thesis is the version about polynomial-time computability, and quantum computing clearly poses a serious challenge to it — in my view, no one has yet articulated any plausible way that QM could break down in a way that would render QC impossible (although obviously we can’t know for sure until we try, and that’s one of the main reasons to try!). Certainly there’s nothing known like the Planck scale for analog computing. Anyway, I’ve written at length about these things in QCSD and elsewhere.

Comment #116 April 25th, 2021 at 11:54 am

Perhaps a different take is the issue of verification. At the end of the day, we want our computing devices to do something useful, and that utility implicitly constitutes some form of verification.

I define verification very loosely. If a computer is simulating molecular interactions, maybe you don’t care if the result is exact. All you care about is, say, that the drug you design works.

Likewise, I don’t think we really know that Google sampled from the correct distribution. Instead we just know that they passed a particular test.

What does verification look like for the teapot example? We could try running the experiment a second time and make sure the outputs match. Good luck! Or we could just eyeball it and confirm that the shard pattern looks plausible. But a classical computer could easily fool such eyeballing.
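[Editor's note: why "run it a second time" fails for a chaotic system can be shown in a few lines. Here the logistic map stands in for the teapot (a standard chaos toy model, invented for this illustration), and the second run starts from the measured, finitely precise state:]

```python
def diverge(x0, y0, steps=60, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) from two nearby
    starting points; return the largest gap seen along the way."""
    x, y, worst = x0, y0, 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        worst = max(worst, abs(x - y))
    return worst

true_state = 0.123456789123     # the teapot's "actual" state
measured = round(true_state, 6)  # the instrument resolves only 6 digits

# The ~1e-7 measurement error is amplified exponentially, so comparing
# the two runs' outputs verifies nothing:
gap = diverge(true_state, measured)
```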

So the goal of any supremacy claim should be to specify a task together with a test for whether you’ve accomplished it. Then a computing device demonstrates supremacy if it can pass the test much more easily than a classical computer can. This is where Google succeeds and the teapot (and all the examples based on chaotic physical systems) fail.

Comment #117 April 25th, 2021 at 1:26 pm

Ted #110:

As in the previous discussion about quantum hype, whenever people read ‘someone is being fooled’ into a situation involving quantum hype, they’re usually looking at someone who is taking a calculated risk.

One thing that is very different in a commercial environment rather than an academic one is that you don’t build components in order of ‘interestingness’; you try to parallelize as much as you can. It can make sound business sense to start building all the stuff you need around the quantum computer before you actually have the quantum computer, or even if you’re not sure yet you’ll ever have a quantum computer. The cost in manpower of building all the peripherals just goes into the calculation of risk vs. payoff.

Comment #118 April 26th, 2021 at 1:31 pm

Hello, Scott! How do you feel about quantum computing without definite causal structure? Can quantum computation with indefinite causal structures potentially give an advantage over ordinary quantum computation?

Here are articles with this idea: “Quantum computations without definite causal structure” (G. Chiribella, G. M. D’Ariano, P. Perinotti, B. Valiron); “Quantum computation with indefinite causal structures” (Mateus Araújo, Philippe Allard Guérin, Ämin Baumeler); and “Computational advantage from quantum superposition of multiple temporal orders of photonic gates”.

Comment #119 April 26th, 2021 at 1:59 pm

Askhab #118: I’ve already answered many, many versions of that question in earlier comment sections. The short answer is no: if we’re talking about standard QM (leaving aside quantum gravity), then “indefinite causal structure” could be seen as just a high-level programming abstraction for ordinary quantum circuits. Thus, anything you can do in this setting—regardless of what “speedup” it might seem to get—is ultimately going to be “compiled down” to an ordinary quantum circuit with a definite causal structure when it’s actually physically executed, meaning that there can’t be speedups compared to the latter. This is just an instance of a much more general principle, that you can’t get asymptotic speedups from programming abstractions, but only from new algorithms or from new physical hardware that changes the underlying rules (as QC itself does).

Comment #120 April 26th, 2021 at 11:03 pm

Hints of a new kind of statistical mechanics! See previous issue of ‘New Scientist’ , feature article

http://www.newscientist.com/article/mg25033300-300-quantum-computers-are-revealing-an-unexpected-new-theory-of-reality/

“A powerful new idea about how the laws of physics work could bring breakthroughs on everything from quantum gravity to consciousness, says researcher Chiara Marletto”

A statistical mechanics based on counterfactuals would apply at all levels, not just the classical level.

—

It seems to me now that quantum mechanics, statistical mechanics and space-time physics (inc. relativity) should perhaps all be regarded as equally fundamental.

Comment #121 April 27th, 2021 at 4:37 am

Scott or anyone, do you have any idea what is happening in emergent gravity theory (if that’s the right term) these days? Some years back Verlinde had a theory of gravity emerging from quantum entanglement, that gave modified (MOND?) dynamics and supposedly explained dark matter by saying gravity actually wasn’t quite inverse-square with distance or something like that. This got some attention but then new astronomical observations arrived that weren’t compatible with the theory, so afaict the theory went away. More recently though, Sean Carroll and others have been posting about GR coming directly from a “rabid Everettian” version of entanglement, but it’s not clear to me who was working on this or what its status was. Is there enough to it for gravity experts to have any response? Just wondering.

Comment #122 April 27th, 2021 at 6:51 am

Scott #91:

Bringing in “belief in the Extended Church-Turing Thesis” seems like a tautology in some interpretations of the supremacy result:

“My QC breaks the Extended Church-Turing Thesis. Your classical proposal can’t, because of the Extended Church-Turing Thesis.”

More interestingly, it seems that the classical attempts at breaking the ECT mostly center around computing a useful number. I wonder what we know about breaking the ECT in the quantum-supremacy analog, where we are trying to output a useless sample. Say, let’s sample from the evolved state of a chaotic system to some precision, averaged over a Gaussian distribution of inputs. The interesting question here is whether it is possible to construct a chaotic system such that the physical system has an exponential speedup over simulations, for input/output precisions up to the Avogadro number/Planck scale. Because of how big the Avogadro number/Planck scale is, such a system would have an exponential speedup in practice.

Comment #123 April 27th, 2021 at 9:05 am

Hello Scott,

Regarding your proposed flower pot experiment: in geology and mining engineering, there has been some related work in probabilistic models of “comminution,” i.e., what happens to rocks or rubble when you crush or grind them further. For better or worse, the military has also been interested in questions of “fragmentation dynamics” — when you blow up an artillery shell, into how many and what size of pieces does it tend to explode? Apparently, N.F. Mott did seminal wartime work in this area—see for instance “Fragmentation of Shell Cases,” Proc. Royal Soc. 189: 300 (1947). For more recent theory, check out for example “Prediction of fragment number and size distribution in dynamic fracture,” by L. Zhang, et al., J. Phys. D: Appl. Phys. 32: 612 (1999), or “Fragments of matter from a maximum-entropy viewpoint,” by R. Englman, J. Phys.: Condens. Matter 3: 1019 (1991).

Anyway, the very fact that folks try hard to model these sorts of probability distributions tends to support your view that the systems themselves do not offer particularly efficient or convenient analog computational models of their own fragmentation behavior.
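[Editor's note: as a toy illustration of such fragment statistics — a hypothetical example, not taken from the cited papers — one simple Mott-type form puts the fraction of fragments heavier than m at exp(-sqrt(m/mu)), which is easy to sample by inverse transform:]

```python
import math
import random

def sample_fragment_masses(n, mu=1.0, seed=0):
    """Draw masses with survival function P(M > m) = exp(-sqrt(m/mu)).
    Inverse transform: if U is uniform on (0, 1], then
    m = mu * (ln(1/U))^2 has exactly this distribution."""
    rng = random.Random(seed)
    return [mu * math.log(1.0 / (1.0 - rng.random())) ** 2
            for _ in range(n)]

masses = sample_fragment_masses(20000)

# ln(1/U) is exponentially distributed with rate 1, and its second
# moment is 2, so the mean fragment mass should be close to 2*mu:
mean_mass = sum(masses) / len(masses)
```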

Comment #124 April 27th, 2021 at 10:03 am

asdf #121: Sorry, I don’t know much of anything about emergent gravity a la Verlinde, although certainly the idea of spacetime emerging from entanglement structure a la AdS/CFT remains extremely popular.

Comment #125 April 27th, 2021 at 10:07 pm

#121, #124

I’ve settled on a somewhat unusual metaphysics. Before physics, comes metaphysics 😉 One must get that straight.

There’s a ‘Zen’ position one can take, where one recognizes that the distinction between ‘fundamental’ and ‘emergent’ is not always clear-cut. Back in the ’60s and ’70s, a guy called Geoffrey Chew proposed what is known as the ‘bootstrap model’ in particle physics. That turned out to be wrong for particle physics, but the general philosophical idea could still work at a deeper level.

So the idea is that there’s no actual fact of the matter as to whether certain things are ‘fundamental’ or ’emergent’. There’s simply a coherent ‘loop’ of things where reality ‘bootstraps’ itself. So entities could be regarded as both fundamental or emergent depending on how they are being modelled.

So I’ve come to think it’s something like this for basic physics properties: mass-energy, space and time. You simply have 3 physics ‘types’ that are both fundamental *and* emergent in the sense that each could be constructed from or defined in terms of the other two. And I think, for each ‘type’, there are 3 corresponding ‘physics pictures’ : quantum mechanics (mass-energy), relativity (space-time) and statistical mechanics (arrow of time).

So if I’m right, in one picture, it would be valid to regard space-time as ’emergent’ from statistical mechanics and quantum mechanics, but in another picture, space-time can be treated as ‘fundamental’ ! It sounds a bit paradoxical, but this is what the bootstrap idea entails.

And I suspect that several major philosophical issues will end up resolving this way. For instance, the question as to whether time is fundamental or not. I just don’t think there’s a fact of the matter. In one picture of reality, you could take time to be fundamental; in another, it’s emergent.

As Wolfram physics researcher Jonathan Gorard said in a recent tweet,

“I’d go so far as to conjecture quite generally that any “emergent” effect may be naturally recast as a “fundamental” effect, if one simply chooses the right set of underlying primitives.”

Comment #126 April 28th, 2021 at 5:27 am

Regarding the bucket experiment discussion up around #73 and following: It took me a long time to get the distinction between predicting and simulating. Here are some of the examples that helped me get it:

(1) A snowflake falling through the air exhibits sensitive dependence on initial conditions; you are never going to be able to measure the atmospheric conditions and predict where and how it will land. But Hollywood has no problem coding up CGI snowstorms that look right to viewers, since simulating a snowstorm is less expensive than dropping giant buckets of snowflakes on the set.

(2) You are probably not going to be able to make a computer which can look at a video of a die tumbling across a table and predict which side it will land on. But this doesn’t stop online casinos from using random number generators; their fake dice are (hopefully) indistinguishable from real ones.

(3) We wonder whether AI will ever pass the Turing test and converse as well as a human being. If they do, that doesn’t mean they will be able to predict what some particular human would say!

Similarly, it would be quite easy to make a simulation which produces virtual buckets whose ripple counts resemble real buckets, even though it won’t be able to predict what a particular bucket will do.
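[Editor's note: the dice example can be made concrete. A PRNG "die" passes the same statistical fairness test a physical die would, even though neither can predict the other's next roll; the seeds below are arbitrary stand-ins for "physical" vs. "simulated":]

```python
import random

def roll_counts(rolls, seed):
    """Tally the faces of a simulated fair six-sided die."""
    rng = random.Random(seed)
    counts = [0] * 6
    for _ in range(rolls):
        counts[rng.randrange(6)] += 1
    return counts

def chi_square(counts):
    """Pearson chi-square statistic against the fair-die hypothesis.
    For 5 degrees of freedom, values below ~11 are typical and values
    above ~20 are suspicious."""
    n = sum(counts)
    expected = n / 6
    return sum((c - expected) ** 2 / expected for c in counts)

# Two independent "dice" -- both pass the fairness test, yet knowing
# one run tells you nothing about the other's individual rolls:
# simulation reproduces the distribution, not the sample.
a = roll_counts(60000, seed=1)
b = roll_counts(60000, seed=2)
```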

Comment #127 April 28th, 2021 at 9:05 am

It always cracks me up to read things like this (from Scott’s article on QC):

“According to our current understanding, they would provide dramatic speedups for a few specific problems—such as breaking the cryptographic codes that are widely used for monetary transactions on the Internet.”

Yeah, the world needs that…

Comment #128 April 28th, 2021 at 9:32 pm

Scott #124, I don’t know whether Verlinde’s theory is in the AdS/CFT framework but I do know “QM=GR” is still around. What I wonder is whether any of the gravity “establishment” (string theorists etc.) take it seriously. If John Baez is around: any idea?

QM=GR helps with QC skepticism in a way. One could ask how a tiny localized thing like quantum entanglement, whose coherence is destroyed by the tiniest puff of contact with the environment, could access seemingly exponential resources. There would have to be a heck of a lot of energy down there somewhere. But if you add the idea that high-dimensional entanglement also entails the forces that move galaxies around, QC becomes a little bit easier to believe.

Mjgeddes: the bootstrapping concept (in my limited understanding) doesn’t seem that meaningful, in that you can say the same things about stuff in math, which doesn’t really have causation like we like to imagine exists in physics. Does analysis emerge from geometry, or the other way around? At most we can say you can’t really have one without the other.

Comment #129 May 1st, 2021 at 3:26 am

Came here to say that the problem of how an elastic brittle teapot cracks into pieces is mathematically well defined, but hard numerically. There is a whole mathematical and numerical field around smashing teapots. It all starts from the Mumford-Shah functional (where Mumford is the great David Mumford),

https://en.wikipedia.org/wiki/Mumford%E2%80%93Shah_functional

I know this because, a long, long time ago, it was part of my PhD thesis; see for example the article “Energy Minimizing Brittle Crack Propagation,” J. of Elasticity, 52, 3 (1999) (submitted in 1997), pp. 201-238.

free pdf here: http://imar.ro/~mbuliga/brittle.pdf

Comment #130 May 5th, 2021 at 1:25 pm

This is a link to a recent D-Wave Nature publication, showing (in my opinion) much more in terms of quantum advantage than any other quantum computing system built to date. Do you agree?

https://www.nature.com/articles/s41467-021-20901-5.epdf?sharing_token=-jK_bySvNsC8pg%5B%E2%80%A6%5DOBIYQuQffvB40RDQgMposhxOn-o85Juo9gRzHshn-N-NGpGsllDbigbJmY%3D

Also, I was looking at revenue from quantum computer use in a recent Gartner report, and D-Wave as of Q4 owns 92.2% of worldwide QC revenue. So there’s that.

Comment #131 May 7th, 2021 at 12:06 am

https://www.youtube.com/watch?v=J0p_thJJnoo I think there are flaws in Chollet’s arguments: on Turing completeness (he doesn’t consider increasing circuit sizes), on the hash-table interpretation, and so on. Since it is theoretical, you might find the arguments entertaining, and perhaps you can post your thoughts on the blog.

Comment #132 May 8th, 2021 at 12:21 pm

Maybe we should just strengthen our teapots and be done with it.

Comment #133 May 9th, 2021 at 12:23 am

#131

Chollet is a very very smart guy, one of the few people I follow. Chollet, in a sense, solidified the final pieces of my metaphysics. Chollet talked about topology-based cognition versus geometry-based cognition, and I think he’s right. Read what follows.

I think the metaphysics of reality (including the nature of complexity, computation, time, intelligence and values/alignment!) is all so simple. What’s remarkable is how simple it all is. And by implication, how incredibly poor we humans really were at philosophy. I mean, it could probably have been worked out long ago. And it should have been.

—

So, I guess the core postulate I would make is that there is a ‘space of possible worlds’ (so this is modal realism), and this is the scaffolding to the ‘computational layer’ of reality, which is still under construction (this is a growing block universe picture). The construction of reality is not yet complete, and computation is the incomplete layer! This computational layer amounts to paths through the space of possible worlds.

Computation is equivalent to *time*. So the concept of time was a computer science concept all along! Time cannot be understood within the context of the ordinary physics picture. And time manifests itself in 3 ways: as causality, as compression and as compositionality. And that’s equivalent to the geometry, iterated functions and topology, respectively, of the paths through the space of possible worlds. So *particular* worlds are becoming actualized from the space of *possible* worlds, and this happens at the computational level. And this is the ongoing construction of reality.

The math and physics layers of reality are the backdrop (the possible worlds) of reality, from which emerges a 3rd layer, the computational layer, which is time (particular worlds or paths through the space). To exist *is* to be computable, to be reducible, to be comprehensible! And the more comprehensible something is, the greater the measure of its existence.

Cognition (or mind) is the mapping of the paths through the space of possible worlds, based on measures of the 3 arrows of time, which for causality, compression & compositionality, are probability, complexity & truth values, respectively.

For the 3 arrows of causality, compression & compositionality, the corresponding cognition is perception, action & reasoning, respectively. And this matches what Chollet said about geometry-based cognition (which is perception) versus topology-based cognition (which is reasoning).

—

The picture I’ve outlined is essentially a physical interpretation of what homotopy type theory and the univalence axiom say. And I think that’s it. That’s everything.

Comment #134 May 10th, 2021 at 12:47 am

[…] those who read my reply to Richard Borcherds on “teapot supremacy”: seeking better data, I ordered a dozen terra cotta flowerpots, and smashed eight of them on my […]

Comment #135 May 10th, 2021 at 2:50 am

#133 Not sure. The field of AI rests on unfounded foundations. Complexity theory at least has conjectures without which much of the field would be bogus — for example, you really do believe P ≠ NP, that Mulmuley’s program is going to succeed, etc. Every field has a God figure. There was one Turing Award winner for complexity theory, but AI does not seem to have one. I wish I could get a chance to get into AI and put my head into it, so I could figure out which parts are BS and which are meaningful. Chollet might be able to see certain things which he considers important and big-picture. Hard to say whether every Tom, Dick and Harry can be Einstein.

I really wish Professor Aaronson would comment on Chollet’s views after watching his videos (https://www.youtube.com/watch?v=mEVnu-KZjq4); after all, Professor Aaronson is a Bayesian, and so an AI researcher in spirit.

Comment #136 May 10th, 2021 at 9:17 am

I would really prefer that Professor Aaronson *NOT* stoop to writing a clarification of a non-expert’s “teapot supremo-conspiracy” theory, and instead increase his influence in fertile areas among seemingly genuine thinkers who have given thought to the field of AI. Right now there is a single such figure, Professor Sanjeev Arora, but his way of doing things differs from Professor Aaronson’s preferred philosophical route, which is what learning and AI need right now; and if Professor Aaronson came in, others would follow, enriching the possibilities for the future. Looking forward to Professor Aaronson’s opinion.

Comment #137 May 11th, 2021 at 3:13 am

An entertaining post, but in the end it’s just another crack pot theory!

Comment #138 May 14th, 2021 at 5:44 pm

@Michelle Sim:

That’s not quantum advantage, because it compares the quantum algorithm to only one classical algorithm.

Comment #139 May 20th, 2021 at 3:10 pm

I have a proof that width-5 branching programs capture a significant bit of the permanent. How should I publish the result?