I wish all the people who are spending time creating scores of new pronouns and P.C. euphemisms would focus on a more worthy task: making the English language quantum-friendly. Then it might even be possible to teach quantum theory to 5-year-olds!

> If you’re going to grossly misrepresent what I wrote, there’s no point in continuing the discussion. Where is your self-respect?

Let’s recap, because you’re being very evasive here.

You open by saying that treating amplitudes as probabilities is wrongheaded. I say that it’s useful in understanding why QCs are faster than classical machines.

You react with disbelief. I attempt to explain why it’s helpful to consider amplitudes as probabilities in order to capture the quantum speedup.

You double down on your disbelief and point to the vector size as the basis for the quantum speedup – why look any further? I point out that this is a popular misconception, since classical systems can efficiently manipulate equally large vectors, and that you need to go further.

Why is going further important? Because you’re leaving out interference, which is what *negative probabilities* add to the classical picture.

Terry is adding signs to physical objects in order to help develop an intuition for quantum computing, but apparently adding a sign to a probability is just going too far for you. And why?

The probabilities are precisely the things you want to add the signs to, because they are what stays implicit.

> I don’t need to look at your list for that basic misconception; I can just read it off a pop-sci magazine.

If you’re going to grossly misrepresent what I wrote, there’s no point in continuing the discussion. Where is your self-respect?

> The risk of losing (progressively, until relatively late in our educational lives) the mental plasticity required for understanding quantum theory is very real. But it is not “brain age” alone that plays such a part.

It is also the academic context in which someone learns to operate that makes him/her less open to nonconformist ideas. Nonconformist in content, but also nonconformist as to who expresses them. In other words, the “more” expert someone becomes in one field, the deeper he/she enters the circles of his/her expert peers. And that is a good thing; it is how science makes progress. But when it comes to answering big, philosophical questions, one should be open to as many ideas as possible, whoever provides them.

Of course, I am not talking about spending resources on listening to what anyone has to say about QM after having seen only a couple of documentaries on the topic (maybe in an ideal world). But I believe we could widen the group of people talking about it if we offered more chances to grad students or even undergrads and amateurs (with some level of quantum-physics maturity that needs to be proven) to formally express any innovative ideas they have on e.g. the interpretation of Quantum Mechanics!

I know that there is a good reason why someone is considered an expert, but a true expert knows that a great idea can be found anywhere. And I am not so romantically optimistic as to expect a Ramanujan–Hardy kind of collaboration, but you never really know what to expect in any field as long as you are willing to consider new, serious ideas, (almost) wherever they come from.

> And maybe Kevin can implement an equivalent solver using TensorFlow.

They’ll both be slow, that’s for sure, but the random-number-generator abstraction yields a polynomial-time algorithm, since all of the real work (the interference) happens implicitly.

And that’s the reason why it’s useful to consider amplitudes as probabilities in the context of quantum vs classical. You don’t have to explicitly track a huge vector, but you still get the interference needed to solve some difficult problems.
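The cancellation being described can be seen in a few lines of numpy (my own sketch, not code from the thread): applying a Hadamard gate twice returns a qbit to |0⟩, because the two paths leading to |1⟩ carry opposite signs and cancel.

```python
import numpy as np

# Hadamard gate: |0> -> (|0> + |1>)/sqrt(2),  |1> -> (|0> - |1>)/sqrt(2)
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

state = np.array([1.0, 0.0])   # start in |0>

after_one = H @ state          # both amplitudes ~0.707: either outcome equally likely
after_two = H @ after_one      # the |1> contributions have opposite signs and cancel

print(after_one)  # ~[0.7071, 0.7071]
print(after_two)  # ~[1.0, 0.0] -- back to |0> with certainty
```

With ordinary (non-negative) probabilities the second step could never undo the first; the signs are what make the cancellation possible.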

If the concept of negative probabilities bothers you so much, then think of them as [x, y] pairs, where x is a boolean and y is a probability.
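As a purely illustrative sketch of that [x, y] encoding (the names here are hypothetical, not from any library):

```python
from dataclasses import dataclass

@dataclass
class SignedProb:
    """A 'negative probability' as an [x, y] pair: a sign flag plus a magnitude."""
    negative: bool   # x: the boolean sign flag
    prob: float      # y: a non-negative probability mass

    def value(self) -> float:
        return -self.prob if self.negative else self.prob

# Two equal contributions with opposite signs interfere destructively:
paths = [SignedProb(False, 0.5), SignedProb(True, 0.5)]
total = sum(p.value() for p in paths)
print(total)  # 0.0 -- that outcome is cancelled out
```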

> Take a look again at my list of what I think should be emphasized. The first item hints at why a QC can be faster: “You aggregate qbits using the tensor product; hence O(n) physical resources (n qbits) yields a quantum state vector of length 2^n.”

I don’t need to look at your list for that basic misconception; I can just read it off a pop-sci magazine.

The vector size by itself says nothing about the power of a quantum computer. In fact, there are several quantum gate sets (for example, the Clifford gates, by the Gottesman-Knill theorem) that can be efficiently simulated by a classical computer, for state spaces as large as you want.

You have to go further than that. Once you get there you might realize that:

> Oh, yeah, I can see how negative probabilities might allow you to solve in polynomial time a problem that takes a classical computer exponential time

The concept of “negative probabilities” is meant to describe why a QC is faster, not how it works.

And the oxymoron of “negative probabilities” utterly fails at this task. Seriously, who in the world thinks, “Oh, yeah, I can see how negative probabilities might allow you to solve in polynomial time a problem that takes a classical computer exponential time”?

Take a look again at my list of what I think should be emphasized. The first item hints at why a QC can be faster: “You aggregate qbits using the tensor product; hence O(n) physical resources (n qbits) yields a quantum state vector of length 2^n.”

The third and last items hint at why getting that speedup is tricky, and why a QC (probably) can’t solve NP-complete problems in polynomial time.
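The scaling in that first item is easy to check numerically; here is a small numpy sketch (my own, under the convention that qbits are aggregated with the Kronecker product):

```python
import numpy as np

qbit = np.array([1.0, 0.0])       # one qbit in state |0>

state = qbit
for _ in range(9):                # aggregate 10 qbits in total
    state = np.kron(state, qbit)  # each tensor product doubles the vector length

print(len(state))  # 1024 == 2**10: O(n) qbits, state vector of length 2^n
```

Ten physical qbits already imply 1024 amplitudes; 50 qbits would imply about 10^15, which is why the classical simulation above cannot scale.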

I think the analogy with a neural network classifier is actually worse.

That analogy is not meant to explain how QC works, only to emphasize that all the important stuff happens before probabilities ever enter the picture.

> These FFT articles provide further concrete grounds to wonder — as prior *Shtetl Optimized* comments have wondered — why was 20th century STEM-progress immensely *slower* than might reasonably have been strategically projected and practically achieved?

Am I understanding this correctly? Or is there a way to simulate precise tiny phase shifts on a quantum computer that only supports large shifts?

The Solovay-Kitaev theorem does basically that. It shows that any single-qubit gate can be efficiently approximated with a sequence of gates drawn from a universal set, up to the desired precision.
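This is not the Solovay-Kitaev algorithm itself, but a brute-force numpy sketch of the same idea (my own code, with illustrative names): enumerate short words over the universal set {H, T} and keep the one closest, up to a global phase, to a small z-rotation that the hardware supposedly cannot perform directly.

```python
import itertools
import numpy as np

# A universal single-qbit gate set: Hadamard and T.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
GATES = {"H": H, "T": T}

def rz(theta):
    """Rotation about the z axis of the Bloch sphere by angle theta."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def distance(u, v):
    """Distance between two 2x2 unitaries, ignoring global phase."""
    return np.sqrt(max(0.0, 1.0 - abs(np.trace(u.conj().T @ v)) / 2.0))

target = rz(0.2)                       # a "tiny" phase shift
best_dist, best_word = np.inf, ""
for length in range(1, 11):            # every H/T word up to length 10
    for word in itertools.product("HT", repeat=length):
        u = np.eye(2, dtype=complex)
        for g in word:
            u = GATES[g] @ u           # apply gates left to right
        d = distance(u, target)
        if d < best_dist:
            best_dist, best_word = d, "".join(word)

print(best_word, best_dist)
```

The exhaustive search blows up exponentially with the word length and is only meant to show what “approximating a gate from a fixed set” means; the actual Solovay-Kitaev construction reaches precision ε with gate sequences of length only polylogarithmic in 1/ε.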

Shor’s paper also addresses this problem using another result:

> When k − j is large in the gate S_j,k in (4.3), we are multiplying by a very small phase factor. This would be very difficult to do accurately physically, and thus it would be somewhat disturbing if this were necessary for quantum computation. Luckily, Coppersmith [1994] has shown that one can define an approximate Fourier transform that ignores these tiny phase factors, but which approximates the Fourier transform closely enough that it can also be used for factoring.

I would like to see a Bloch sphere visualization showing how such a tiny rotation can be approximated using a typical universal gate set.
