## More quantum computing popularization!

I now have a feature article up at Quanta magazine, entitled “What Makes Quantum Computing So Hard To Explain?” I.e., why do journalists, investors, etc. so consistently get central points wrong, even after the subject has been in public consciousness for more than 25 years? Perhaps unsurprisingly, I found it hard to discuss that meta-level question, as Quanta’s editors asked me to do, without also engaging in the object-level task of actually explaining QC. For regular Shtetl-Optimized readers, there will be nothing new here, but I’m happy with how the piece turned out.

Accompanying the Quanta piece is a 10-minute YouTube explainer on quantum computing, which (besides snazzy graphics) features interviews with me, John Preskill, and Dorit Aharonov.

On a different note, my colleague Mark Wilde has recorded a punk-rock song about BosonSampling. I can honestly report that it’s some of the finest boson-themed music I’ve heard in years. It includes the following lyrics:

Quantum computer, Ain’t no loser
Quantum computer, Quantum computer

People out on the streets
They don’t know what it is
They think it finds the cliques
Or finds graph colorings
But it don’t solve anything
Said it don’t solve anything
Bosonic slot machine
My lil’ photonic dream

Speaking of BosonSampling, A. S. Popova and A. N. Rubtsov, of the Skolkovo Institute in Moscow, have a new preprint entitled “Cracking the Quantum Advantage threshold for Gaussian Boson Sampling.” In it, they claim to give an efficient classical algorithm to simulate noisy GBS experiments, like the one six months ago from USTC in China. I’m still unsure how well this scales from 30–40 photons up to 50–70 photons; which imperfections of the USTC experiment are primarily being taken advantage of (photon losses?); and how this relates to the earlier proposed classical algorithms for simulating noisy BosonSampling, like the one by Kalai and Kindler. Anyone with any insight is welcome to share!

OK, one last announcement: the Simons Institute for the Theory of Computing, in Berkeley, has a new online lecture series called “Breakthroughs,” which many readers of this blog might want to check out.

### 21 Responses to “More quantum computing popularization!”

1. Peter Nelson Says:

Rocket from the Tombs! My compliments to Prof. Wilde on his performance and taste in (obscure) music.

2. LK2 Says:

arxiv.org/abs/2106.03747

which was also tweeted by Preskill? Maybe this is not the place for a discussion, but it might inspire a post about quantum machine learning…

Your outreach activity is outstanding: keep it up Scott!

3. Jake Bulmer Says:

There was a nice paper recently by Drummond, Opanchuk, and Reid which also presented methods to approximately calculate probabilities for Gaussian Boson Sampling experiments with threshold detectors (like the USTC experiment): https://arxiv.org/abs/2102.10341. However, their Monte Carlo methods have error scalings similar to, e.g., Gurvits’s algorithm for the permanent. Therefore, Drummond et al. suggest using these methods as a verification/benchmarking tool rather than as a means of simulating experiments.
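Since the comment uses Gurvits’s permanent algorithm as its point of comparison, here is a minimal sketch (my own illustration, not from either paper) of the Glynn/Gurvits-style Monte Carlo estimator it refers to. The estimator is unbiased, but its variance is what limits accuracy, which is exactly why such methods suit verification better than full simulation:

```python
import itertools
import numpy as np

def permanent_exact(A):
    """Exact permanent by brute force (n! terms) -- for checking only."""
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

def permanent_glynn_estimate(A, num_samples=200_000, rng=None):
    """Unbiased Monte Carlo estimate of perm(A) via random +/-1 vectors.

    For x uniform in {-1,+1}^n, E[ prod_j x_j * prod_i (A x)_i ] = perm(A):
    only the permutation terms survive the expectation. The bias is zero;
    the variance is what limits the achievable accuracy.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    x = rng.choice([-1.0, 1.0], size=(num_samples, n))
    # (x @ A.T)[s, i] = (A x_s)_i; multiply rows, then by prod of x entries
    estimates = np.prod(x @ A.T, axis=1) * np.prod(x, axis=1)
    return estimates.mean()

A = np.array([[1.0, 0.5], [0.5, 1.0]])
exact = permanent_exact(A)                    # 1*1 + 0.5*0.5 = 1.25
approx = permanent_glynn_estimate(A, rng=0)
print(exact, approx)
```

The per-sample variance here is already of order 1 for a 2×2 matrix, so the estimate converges only at the usual 1/√(samples) Monte Carlo rate.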

In the paper by Popova and Rubtsov, it strikes me that their claims do not quite match up with their results. I think this is partly due to a confusion between the difficulty of calculating probabilities and that of drawing samples from a probability distribution. Also, they seem to calculate probabilities for different total photon (or detector click) numbers, rather than the probabilities of the (combinatorially many) individual measurement outcomes.

To claim simulation, I think they need to provide evidence that they can draw samples from the correct distribution. If they are exploiting errors in the experiment, there should be some direct comparison between the noisy samples of the experiment and the noisy samples of the simulation, but I do not see anything like this in the manuscript.

4. Paul Hayes Says:

“Let’s start with quantum mechanics” indeed. Why do journalists so consistently get central points about QM wrong? Because, rightly, they’re deferring to the ‘experts’. They – the physicists – are the ones ultimately responsible for the persistence of all the “two places at once” nonsense, “werewave” talk, “ping pong ball test” failures etc. Not to mention the inapt comparison between a classical deterministic bit and a qubit (the appropriate comparison being between a classical probabilistic bit and a quantum probabilistic bit).

5. Arul Says:

In my opinion a field is hard to explain if it cannot be reduced to fundamental organizing tenets (for example, a set of basic foundational rules). As an engineer, I just know from Professor Aaronson’s blog that BQP is not in NP, but I know it only because I read it and because he is an authority in the field. To really understand why, the basic intuition is ‘it does not work by trying all paths’, and so we know that if something does not try all paths it cannot capture NP. The right way to explain it would be: quantum computing works through the basic principles 1, 2, 3…, and if it obeys these then it appears unlikely to capture NP. But working through 1, 2, 3… requires effort and is not an elementary one-line argument, unlike ‘if we had the factors we could verify them, but why is it hard to find the factors, and why does the problem go only one way’. This comes back to a basic tenet about using analogies in understanding, which (unless you are someone like Grothendieck, who could grasp things abstractly) you have been using since childhood. If factoring is a good grasping example for P vs NP what is a good grasping example to understand the analogies to BQP vs NP. I have never seen one, and if there were one it would be fairly straightforward for most people to get the intuition. Many fields remain hidden because there is no proper working example to place them on the common platform of the community.

6. YD Says:

> If factoring is a good grasping example for P vs NP what is a good grasping example to understand the analogies to BQP vs NP.

Factoring 😛

It is a terrible example for P vs NP because it helps perpetuate the myth that quantum computers can solve NP-complete problems. But it is a good example of BQP vs NP.

7. Mordechai Rorvig Says:

I greatly look forward to reading and watching these. Thanks! As someone who has probably gotten these points wrong in the past, I suspect I will continue to do so in the future. But I for one will be grateful for the fact that I no longer have an excuse. 🙂

But in seriousness, I think the main reason journalists and others get this stuff wrong is lack of time, or perhaps more accurately, lack of investment in the time it would take them to learn it. The amount of time journalists are often paid for when writing an article is astonishingly short. A journalist on a physics beat might be told at 10 am: Go write an article on this new Quantum Thingamajig paper, covering a domain you have never seen. File by 4.

Journalists with some grounding or experience in physics can actually pull this off rather well. But some degree of error and inaccuracy is inevitable, particularly for the most difficult topics, of which anything quantum is surely one. There are also a lot of writers and journalists who have very little grounding or experience, who have never taken a calculus class (let alone understood a wave function!), or who might be writing their very first article. These folks are scraping the gum off the tables, rifling around in the trash bins … the sloppiness of their work is 100% unavoidable!

8. Job Says:

> If factoring is a good grasping example for P vs NP what is a good grasping example to understand the analogies to BQP vs NP.

One analogy that I find interesting is that of a machine which, given a circuit, is biased towards execution paths that result in an even number of total operations.

The circuit would still be deterministic, so if the machine receives an input that produces an odd number of operations, then it has no option but to do that.

However, suppose it’s the machine that picks the input, and it picks inputs that produce an even number of operations, as much as possible.

It’s a fictional machine, but how would we use it?

For an NP-Complete problem like SAT you could have a circuit that checks whether the input is a solution, and then if it is, pads its own execution to an even number of operations. Otherwise it pads to an odd number.

Then we run the machine on that circuit and check the input. If the SAT instance is satisfiable, the input will contain the solution.
We can even ignore the output.

IMO, quantum computers are kind of like this, but less powerful.

They only take execution paths that don’t cancel out as a result of destructive interference. And it’s more difficult to “pad the execution” to interfere/not-interfere as needed.

9. Simon Thompson Says:

I have been asked to evaluate the prospects of the application of QC for my employers, so this article (as well as the blog and your interviews on YouTube) is super useful. Thanks.

One thing that stood out to me, and that isn’t commonly discussed, is that as far as I can discover quantum RAM is rather difficult to implement, and loading classical data into it can be slow. From my reading, it may even be that loading into QRAM takes exponential time in the number of bits to be loaded and encoded, even if you have functional QRAM available. I think that some of the algorithms that might be used for practical problems (like Grover’s) need QRAM to work on classical data.

Is my perception true?

10. Craig Gidney Says:

That YouTube video is *by far* the best popularization I’ve seen. I think the closest it got to a mistake was (a) simplifying the story of Feynman coming up with the idea, and (b) showing the Bloch vector randomly jumping all over the place even right after a measurement. (Maybe the constant jumping was supposed to symbolize the qubits being operated on?)

I like how the background lighting up was used to symbolize measurement.

11. Scott Says:

Arul #5: It’s an open problem whether BQP is contained in NP. But as for why BQP isn’t just “obviously” contained in NP—that’s easy to see and you don’t need to take my word for it! It’s simply because the acceptance probability of a BQP computation is, in general, a sum of exponentially many contributions. So, what NP witness would you propose for proving that such a sum is large?

More concretely, in the black-box or oracle setting (please pay attention to that proviso, it’s important!), there are problems like Forrelation, Recursive Fourier Sampling, and the complement of Simon’s problem, which are in BQP but provably not in NP. Those provide concrete examples of things that quantum computers can efficiently do, but for which classical computers can’t even efficiently verify the answer, let alone find it.
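To make the “sum of exponentially many contributions” concrete, here is a toy single-qubit illustration (my own sketch, not from the comment above): the amplitude of a chain of k Hadamard gates, computed as a Feynman path sum over all 2^(k−1) intermediate bit values, each path contributing ±(1/√2)^k. A would-be NP witness for “this sum is large” seemingly has to account for the entire exponential sum, signs and all:

```python
import itertools
import numpy as np

# Toy path-sum: the amplitude <z_out| H^k |z_in> for a single qubit,
# where H is the Hadamard gate.  Each assignment of intermediate bit
# values (a "path") contributes +/- (1/sqrt(2))^k, and the amplitude is
# the sum of these 2^(k-1) signed terms.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def amplitude_by_path_sum(k, z_in=0, z_out=0):
    total = 0.0
    # a path fixes the bit value between each pair of consecutive gates
    for middle in itertools.product([0, 1], repeat=k - 1):
        path = (z_in,) + middle + (z_out,)
        term = 1.0
        for a, b in zip(path, path[1:]):
            term *= H[b, a]          # transition amplitude <b|H|a>
        total += term
    return total

k = 10
direct = np.linalg.matrix_power(H, k)[0, 0]   # matrix-product answer
print(amplitude_by_path_sum(k), direct)      # both 1.0, since H^2 = I
```

Note how `amplitude_by_path_sum(2, 0, 1)` comes out to exactly zero: the two paths cancel, which is the destructive interference that has no analogue in a classical probabilistic sum.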

12. Scott Says:

Simon Thompson #9: No, I see absolutely no reason why loading n classical bits using a QRAM would take exp(n) time. The real question is whether it takes n time—in which case the QRAM is still asymptotically useless—or whether it takes only log(n) time, in which case the QRAM is useful. Seth Lloyd and others have made theoretical proposals for useful QRAMs, ones where the loading would take log(n) time, but none of those proposals has been demonstrated experimentally yet, even to the limited extent that QC itself has been.

Some people argue that useful QRAMs are fundamentally impossible because of memory latency and the finite speed of light. But the same argument, if applied consistently, would also invalidate the classical RAM model! It seems to me that, in principle, someday you could have QRAMs for which O(log(n)) data retrieval is just as good an approximation in practice as it was in the classical case.

If you’re applying Grover’s algorithm to (say) a combinatorial search or optimization problem, which are the main applications that many people envision, then you don’t need a QRAM. If, on the other hand, you wanted to use Grover’s algorithm to search an actual physical database, then you would need a QRAM.
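For readers who want to see the speedup Grover’s algorithm offers with no QRAM in the picture, here is a small self-contained state-vector simulation (the problem size and marked index are arbitrary choices of mine): about (π/4)√N oracle calls suffice to find a marked item among N, versus ~N/2 classically.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover's algorithm: find `marked` among N = 2^n items
    using ~ (pi/4) * sqrt(N) oracle calls instead of ~ N/2 classically."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))       # uniform superposition
    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                  # oracle: flip marked amplitude
        state = 2 * state.mean() - state     # diffusion: invert about mean
    # after ~ (pi/4) sqrt(N) rounds, the marked amplitude dominates
    return int(np.argmax(np.abs(state) ** 2)), iterations

found, calls = grover_search(10, marked=731)
print(found, calls)   # finds 731 with 25 oracle calls (N = 1024)
```

The oracle here is just a sign flip on one amplitude; in a database application, implementing that sign flip over classical data is precisely where the QRAM requirement comes in.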

13. Simon Thompson Says:

Thank you very much for the response, this is extremely helpful and clarifying for me. I will go and read Seth Lloyd’s work to see if I can get a better grasp!

14. Arul Says:

Yes, but in “More concretely, in the black-box or oracle setting (please pay attention to that proviso, it’s important!), there are problems like Forrelation, Recursive Fourier Sampling, and the complement of Simon’s problem, which are in BQP but provably not in NP. Those provide concrete examples of things that quantum computers can efficiently do but for which classical computers can’t even efficiently verify the answer let alone find it” you get ‘black-box’, ‘oracle setting’, ‘Forrelation’, ‘Recursive Fourier Sampling’, and ‘the complement of Simon’s problem’, which are in BQP but provably not in NP… If you tell a high school student these things, it will probably push them away from the field. They are not as down to earth as factoring, or gravity explained as curvature by analogy to balls on a sheet of cloth, etc. Notice that a majority of the population do not have a college degree; among those who do, a majority are not in science; among those in science, a majority are in bio-related areas; and among those outside bio, those outside EE and CS are the majority. So throwing around terms like these is not going to engage the population in the right direction unless they see concrete, quick, direct examples they can connect with. As I said, not everyone is Grothendieck, for whom Serre had to cast a problem in an abstract way that would have pushed off anyone but Grothendieck.

15. Arul Says:

That BPP is in BQP, and that even BPP is not known to be in NP, is an easier explanation, but it won’t connect with the populace, and connecting with the populace was one of the motivations of the Quanta article. No one thinks set-theoretically on a day-to-day basis, unless you have nothing else to do but play the Bayesian who wants to assign cardinalities to sets for which you have no sampling guarantees. People want a foundational example as motivation (“factoring is done by BQP” is catchy, but the point is: why can’t other things in NP be? How do you provide an example, let alone an easy one, of something that cannot be done… it is a bit of a mystery).

16. Scott Says:

Arul: I do what I can, trying to fill each brain with whatever that brain is currently ready to hold. If you have a better way, do share! 🙂

17. Arul Says:

I do not know, and in the examples I provided the negatives are removed. How do you explain something that a model cannot do within the operations of the model? The balls nicely illustrate something similar to gravity, but they hide certain things that geometric conceptions of gravity cannot capture (almost everything about quantization, and things emergent), and we ‘try’ our strengths by massaging another model. Very few people can do these things from the ground up. Of course we know Einstein was one of them. Either other philosophies have to emerge, or else I would give up and say: hey, you know what, the models of operation cannot be taken care of, and maybe P=BQP=NP (we do not know, and we are trying our best with the ‘black-box’ and ‘oracle setting’, which publishes papers and keeps the ***question*** alive in the culture but avoids the central tenets of the ***question*** itself).

18. Paul Hoffman Says:

Stiv Bators (lead singer of the Dead Boys) would not know what to do with that cover version. I do wonder how many readers of this blog were around and thrashing in 1977…

19. Misha Erementchouk Says:

I believe the main reason why popularizations fail is that virtually all of them are based on somewhat weak principles. Since the beginning of the modern (post-Shor’s-algorithm) QC era, one of the standard arguments has been that QC brings physics to computing. This creates the impression that it is the complexity of quantum mechanics that is at play, while, in fact, quantum mechanics merely provides the physical background for realizing the QC model of computation.

To distinguish one (quantum mechanics) from the other (the QC model of computation), I make a statement: no two-qubit system can demonstrate the emergence of the new model of computation. The formal reason is the “unfortunate” coincidence 2^2 = 2 × 2 = 2 + 2. A single-qubit system can be faithfully represented by a two-mode oscillator. Coincidentally, any two-qubit system is representable by two two-mode oscillators. All quantum peculiarities, like superposition and entanglement, can be fully demonstrated in this model; superposition and entanglement alone are simply not sufficient. Moreover, since the classical emulation is not bound by projective measurements, it is way superior to the system of qubits.

Qubits and their classical emulations start to depart from each other already at the next step. Three qubits require four two-mode oscillators. This doesn’t sound too bad, but this is where the difference between direct products and direct sums kicks in hard. The respective classical emulation of quantum teleportation or dense coding already reveals that not only does the number of “physical” carriers scale badly in classical systems, the complexity of controls also blows up.

Thus, quantum computing originates from both: the collapse of the number of carriers _and_ the collapse of whole trees of controls. Both are consequences of the quantum Hilbert spaces of composite systems being direct products, while the respective classical phase spaces are direct sums. This is pretty much never presented in popularizations, beyond mentioning the Feynman contribution in passing.
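The dimension counting behind the comment can be rendered numerically (this is only a sketch of the arithmetic, not of any physical model): tensor products multiply dimensions while direct sums add them, so the coincidence at two qubits (2² = 2 + 2) disappears immediately at three.

```python
# Dimension counting behind the 2^2 = 2 x 2 = 2 + 2 remark.
# Quantum: n qubits live in a tensor (direct) product, dim = 2^n.
# Classical emulation: two-mode oscillators combine by direct sum,
# each contributing 2 dimensions, so matching 2^n dimensions takes
# 2^(n-1) oscillators.  Only at n = 2 do the two scalings coincide.
for n in range(1, 8):
    quantum_dim = 2 ** n
    classical_dim_n_oscillators = 2 * n       # n oscillators: direct sum
    oscillators_needed = quantum_dim // 2     # to match the quantum dim
    print(n, quantum_dim, classical_dim_n_oscillators, oscillators_needed)
```

The n = 3 row reproduces the comment’s count: eight quantum dimensions, hence four two-mode oscillators, while three oscillators would supply only six dimensions.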

One last thing: why talking only about the exponential growth of the number of states falls short. Let the signal level in a circuit be controlled by an N-bit control. This is the routine situation in various digitization systems. The total number of levels is then 2^N. For people with this background, considering, say, N = 100 is pretty much meaningless: yes, the number of states is enormous, but this means nothing, since effectively N will be cut off by the information capacity of the subsequent circuitry. I’ve seen for myself how explaining quantum computing by bringing up the quantum Fourier transform in fact persuaded a whole (though rather small) group of people that competitive quantum computers will never be built.

20. Nebil Reyhani Says:

Here is a fun example of exaggerating the future promises of quantum computing, of the kind you describe as “cartoonish” in your Scientific American article:
However, when I think that some people will get rich through this hype, and some others will most probably become poor, I find myself unable to laugh.
Nevertheless, I could find in the author’s own interpretation of quantum theory a rather inspiring perspective. The author writes:
“Second, in classical mechanics, objects can only ‘work’ with things that are also ‘real’. You can’t use your imaginary friend to help move the couch. You need your real friend to help you.”
I think the author makes here, even though apparently unintended, a very important point. Could the difference between classical and quantum computers be simply this: classical computers can rely only on “real” friends; quantum computers, on the other hand, are capable of letting “real” sofas be carried away by “imaginary” friends. I admit that this is an oversimplification and we should first ask: what is “real” anyway? However, it’s also striking that we don’t have to ask this question for “real” sofas, “real” friends, and “real” bits. As a philosopher, I would very much like to know your opinion.

21. Scott Says:

Nebil #20:

Could the difference between classical and quantum computers be simply this: classical computers can rely only on “real” friends; quantum computers, on the other hand, are capable of letting “real” sofas be carried away by “imaginary” friends. I admit that this is an oversimplification and we should first ask: what is “real” anyway? However, it’s also striking that we don’t have to ask this question for “real” sofas, “real” friends, and “real” bits. As a philosopher, I would very much like to know your opinion.

Einstein famously said that things should be made as simple as possible but no simpler. I think what you wrote beautifully illustrates the necessity of the latter part of his saying. 🙂
