Archive for the ‘Quantum’ Category

That IACR preprint

Tuesday, April 16th, 2024

Update (April 19): Apparently a bug has been found, and the author has withdrawn the claim (see the comments).


For those who don’t yet know from their other social media: a week ago the cryptographer Yilei Chen posted a preprint, eprint.iacr.org/2024/555, claiming to give a polynomial-time quantum algorithm to solve lattice problems. For example, it claims to solve the GapSVP problem, which asks to approximate the length of the shortest nonzero vector in a given n-dimensional lattice, to within an approximation ratio of ~n^4.5. The best approximation ratio previously known to be achievable in classical or quantum polynomial time was exponential in n.

If it’s correct, this is an extremely big deal. It doesn’t quite break the main lattice-based cryptosystems, but it would put those cryptosystems into a precarious position, vulnerable to a mere further polynomial improvement in the approximation factor. And, as we learned from the recent NIST competition, if the lattice-based and LWE-based systems were to fall, then we really don’t have many great candidates left for post-quantum public-key cryptography! On top of that, a full quantum break of LWE (which, again, Chen is not claiming) would lay waste (in a world with scalable QCs, of course) to a large fraction of the beautiful sandcastles that classical and quantum cryptographers have built up over the last couple decades—everything from Fully Homomorphic Encryption schemes, to Mahadev’s protocol for proving the output of any quantum computation to a classical skeptic.

So on the one hand, this would substantially enlarge the scope of exponential quantum speedups beyond what we knew a week ago: yet more reason to try to build scalable QCs! But on the other hand, it could also fuel an argument for coordinating to slow down the race to scalable fault-tolerant QCs, until the world can get its cryptographic house into better order. (Of course, as we’ve seen with the many proposals to slow down AI scaling, this might or might not be possible.)

So then, is the paper correct? I don’t know. It’s very obviously a serious effort by a serious researcher, a world away from the P=NP proofs that fill my inbox every day. But it might fail anyway. I’ve asked the world experts in quantum algorithms for lattice problems, and they’ve been looking at it, and none of them is ready yet to render a verdict. The central difficulty is that the algorithm is convoluted, and involves new tools that seem to come from left field, including complex Gaussian functions, the windowed quantum Fourier transform, and Karst waves (whatever those are). The algorithm has 9 phases by the author’s count. In my own perusal, I haven’t yet extracted even a high-level intuition—I can’t tell any little story like for Shor’s algorithm, e.g. “first you reduce factoring to period-finding, then you solve period-finding by applying a Fourier transform to a vector of amplitudes.”
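To spell out that little story for Shor: the quantum part finds the period r of the function a^x mod N, and everything after that is classical number theory. Here is a minimal sketch of the classical half, using a toy example (N = 15) rather than anything cryptographically meaningful:

```python
from math import gcd

def factor_from_order(N, a, r):
    """Classical half of Shor's algorithm: given the order r of a mod N
    (found by the quantum period-finding step), try to extract a
    nontrivial factor of N via gcd(a^(r/2) +/- 1, N)."""
    if r % 2:
        return None  # odd order: retry with a different random a
    x = pow(a, r // 2, N)  # modular exponentiation
    for candidate in (gcd(x - 1, N), gcd(x + 1, N)):
        if 1 < candidate < N:
            return candidate
    return None  # unlucky case (a^(r/2) = -1 mod N): retry

# Toy example: a = 7 has order 4 mod 15, since 7^4 = 2401 = 160*15 + 1
print(factor_from_order(15, 7, 4))  # prints 3 (and 15 = 3 * 5)
```

The point of the story is that all the quantum magic is concentrated in one subroutine (period-finding via the Fourier transform); nothing comparable has yet emerged, at least publicly, for Chen's algorithm.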

So, the main purpose of this post is simply to throw things open to commenters! I’m happy to provide a public clearinghouse for questions and comments about the preprint, if those studying it would like that. You can even embed LaTeX in your comments, as will probably be needed to get anywhere.


Unrelated Update: Connor Tabarrok and his friends just put a podcast with me up on YouTube, in which they interview me in my office at UT Austin about watermarking of large language models and other AI safety measures.

And yet quantum computing continues to progress

Wednesday, April 3rd, 2024

Pissing away my life in a haze of doomscrolling, sporadic attempts to “parent” two rebellious kids, and now endless conversations about AI safety, I’m liable to forget for days that I’m still mostly known (such as I am) as a quantum computing theorist, and this blog is still mostly known as a quantum computing blog. Maybe it’s just that I spent a quarter-century on quantum computing theory. As an ADHD sufferer, anything could bore me after that much time, even one of the a-priori most exciting things in the world.

It’s like, some young whippersnappers proved another monster 80-page theorem that I’ll barely understand tying together the quantum PCP conjecture, area laws, and Gibbs states? Another company has a quantum software platform, or hardware platform, and they’ve issued a press release about it? Another hypester claimed that QC will revolutionize optimization and machine learning, based on the usual rogues’ gallery of quantum heuristic algorithms that don’t seem to outperform classical heuristics? Another skeptic claimed that scalable quantum computing is a pipe dream—mashing together the real reasons why it’s difficult with basic misunderstandings of the fault-tolerance theorem? In each case, I’ll agree with you that I probably should get up, sit at my laptop, and blog about it (it’s hard to blog with two thumbs), but as likely as not I won’t.


And yet quantum computing continues to progress. In December we saw Harvard and QuEra announce a small net gain from error-detection in neutral atoms, and accuracy that increased with the use of larger error-correcting codes. Today, a collaboration between Microsoft and Quantinuum has announced what might be the first demonstration of error-corrected two-qubit entangling gates with substantially lower error than the same gates applied to the bare physical qubits. (This is still at the stage where you need to be super-careful in how you phrase every such sentence—experts should chime in if I’ve already fallen short; I take responsibility for any failures to error-correct this post.)

You can read the research paper here, or I’ll tell you the details to the best of my understanding (I’m grateful to Microsoft’s Krysta Svore and others from the collaboration for briefing me by Zoom). The collaboration used a trapped-ion system with 32 fully-connected physical qubits (meaning, the qubits can be shuttled around a track so that any qubit can directly interact with any other). One can apply an entangling gate to any pair of qubits with ~99.8% fidelity.

What did they do with this system? They created up to 4 logical encoded qubits, using the Steane code and other CSS codes. Using logical CNOT gates, they then created logical Bell pairs — i.e., (|00⟩+|11⟩)/√2 — and verified that they did this.

That’s in the version of their experiment that uses “preselection but not postselection.” In other words, they have to try many times until they prepare the logical initial states correctly—as with magic state factories. But once they do successfully prepare the initial states, there’s no further cheating involving postselection (i.e., throwing away bad results): they just apply the logical CNOT gates, measure, and see what they got.
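For concreteness, the logical circuit being demonstrated here is just Bell-pair preparation: a Hadamard followed by a CNOT. A bare statevector sketch of the unencoded version (no Steane code, no trapped ions, just the two-qubit linear algebra):

```python
import numpy as np

# Bell-pair preparation on two bare qubits: Hadamard on qubit 0,
# then CNOT with qubit 0 as control and qubit 1 as target.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi = np.zeros(4); psi[0] = 1.0    # start in |00>
psi = np.kron(H, I) @ psi          # (|00> + |10>)/sqrt(2)
psi = CNOT @ psi                   # (|00> + |11>)/sqrt(2): the Bell pair
print(np.round(psi, 3))            # amplitude 1/sqrt(2) on |00> and |11>
```

In the experiment, each of these two "qubits" is itself a code block of physical ions, and the CNOT is applied transversally; the final measurement statistics (correlated outcomes in both the Z and X bases) are what certify the Bell pair.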

For me personally, that’s the headline result. But then they do various further experiments to “spike the football.” For one thing, they show that when they do allow postselected measurement outcomes, the decrease in the effective error rate can be much much larger, as large as 800x. That allows them (again, under postselection!) to demonstrate up to two rounds of error syndrome extraction and correction while still seeing a net gain, or three rounds albeit with unclear gain. The other thing they demonstrate is teleportation of fault-tolerant qubits—so, a little fancier than just preparing an encoded Bell pair and then measuring it.

They don’t try to do (e.g.) a quantum supremacy demonstration with their encoded qubits, like Harvard/QuEra did—they don’t have nearly enough qubits for that. But this is already extremely cool, and it sets a new bar in quantum error-correction experiments for others to meet or exceed (superconducting, neutral atom, and photonics people, that means you!). And I wasn’t expecting it! Indeed, I’m so far behind the times that I still imagined Microsoft as committed to a strategy of “topological qubits or bust.” While Microsoft is still pursuing the topological approach, their strategy has clearly pivoted over the last few years towards “whatever works.”

Anyway, huge congratulations to the teams at Microsoft and Quantinuum for their accomplishment!


Stepping back, what is the state of experimental quantum computing, 42 years after Feynman’s lecture, 30 years after Shor’s algorithm, 25 years after I entered the field, 5 years after Google’s supremacy experiment? There’s one narrative that quantum computing is already being used to solve practical problems that couldn’t be solved otherwise (look at all the hundreds of startups! they couldn’t possibly exist without providing real value, could they?). Then there’s another narrative that quantum computing has been exposed as a fraud, an impossibility, a pipe dream. Both narratives seem utterly disconnected from the reality on the ground.

If you want to track the experimental reality, my one-sentence piece of advice would be to focus relentlessly on the fidelity with which experimenters can apply a single physical 2-qubit gate. When I entered the field in the late 1990s, ~50% would’ve been an impressive fidelity. At some point it became ~90%. With Google’s supremacy experiment in 2019, we saw 1000 gates applied to 53 qubits, each gate with ~99.5% fidelity. Now, in superconducting, trapped ions, and neutral atoms alike, we’re routinely seeing ~99.8% fidelities, which is what made possible (for example) the new Microsoft/Quantinuum result. The best fidelities I’ve heard reported this year are more like ~99.9%.

Meanwhile, on paper, it looks like known methods for quantum fault-tolerance, for example using the surface code, should start to become practical once you have 2-qubit fidelities around ~99.99%—i.e., one more “9” from where we are now. And then there should “merely” be the practical difficulty of maintaining that 99.99% fidelity while you scale up to millions or hundreds of millions of physical qubits!

What I’m trying to say is: this looks like a pretty good trajectory! It looks like, if we plot the infidelity on a log scale, the experimentalists have already gone three-quarters of the distance. It now looks like it would be a surprise if we couldn’t have hundreds of fault-tolerant qubits and millions of gates on them within the next decade, if we really wanted that—like something unexpected would have to go wrong to prevent it.
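To make "three-quarters of the distance" concrete, here's the arithmetic on a log-infidelity scale, using the round-number fidelities quoted above (~50% in the late 1990s, ~99.9% at best today, ~99.99% as the fault-tolerance goal):

```python
import math

def log_infidelity_progress(start, now, goal):
    """Fraction of the log-scale distance from `start` infidelity to
    `goal` infidelity that has already been covered at `now`."""
    a, b, c = (math.log10(x) for x in (start, now, goal))
    return (a - b) / (a - c)

# Infidelities: 0.5 (then), 1e-3 (best today), 1e-4 (goal)
frac = log_infidelity_progress(0.5, 1e-3, 1e-4)
print(f"{frac:.0%}")  # roughly three-quarters of the way
```

(Whether progress on this log scale continues at a constant rate is, of course, exactly the question.)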

Wouldn’t it be ironic if all that was true, but it simply mattered much less than we hoped in the 1990s? Either just because the set of problems for which a quantum computer is useful has remained stubbornly more specialized than the world wants it to be (for more on that, see the entire past 20 years of this blog) … or because advances in classical AI render what was always quantum computing’s most important killer app, the simulation of quantum chemistry and materials, increasingly superfluous (as AlphaFold may have already done for protein folding) … or simply because civilization descends further into barbarism, or the unaligned AGIs start taking over, and we all have bigger things to worry about than fault-tolerant quantum computing.

But, you know, maybe fault-tolerant quantum computing will not only work, but matter—and its use to design better batteries and drugs and photovoltaic cells and so on will pass from science-fiction fantasy to quotidian reality so quickly that much of the world (weary from the hypesters crying wolf too many times?) will barely even notice it when it finally happens, just like what we saw with Large Language Models a few years ago. That would be worth getting out of bed for.

Postdocs wanted!

Friday, December 22nd, 2023

David Soloveichik, my friend and colleague in UT Austin’s Electrical and Computer Engineering department, and I are looking to hire a joint postdoc in “Unconventional Computing,” broadly defined. Areas of interest include but are not limited to:

(1) quantum computation,
(2) thermodynamics of computation and reversible computation,
(3) analog computation, and
(4) chemical computation.

The ideal candidate would have broad multi-disciplinary interests in addition to prior experience and publications in at least one of these areas. The researcher will work closely with David and myself but is expected to be highly self-motivated. To apply, please send an email to david.soloveichik@utexas.edu and aaronson@cs.utexas.edu with the subject line “quantum postdoc application.” Please include a CV and links to three representative publications. Let’s set a deadline of January 20th. We’ll be back in touch if we need recommendation letters.


My wife Dana Moshkovitz Aaronson and my friend and colleague David Zuckerman are also looking for a joint postdoc at UT Austin, to work on pseudorandomness and related topics. They’re asking for applications by January 16th. Click here for more information.

Staggering toward quantum fault-tolerance

Thursday, December 7th, 2023

Happy Hanukkah! I’m returning to Austin from a Bay Area trip that included the annual Q2B (Quantum 2 Business) conference. This year, for the first time, I opened the conference, with a talk on “The Future of Quantum Supremacy Experiments,” rather than closing it with my usual ask-me-anything session.


The biggest talk at Q2B this year was yesterday’s announcement, by a Harvard/MIT/QuEra team led by Misha Lukin and Vlad Vuletic, that they had demonstrated “useful” quantum error-correction, for some definition of “useful,” in neutral atoms (see here for the Nature paper). To drill down a bit into what they did:

  • They ran experiments with up to 280 physical qubits, which simulated up to 48 logical qubits.
  • They demonstrated surface codes of varying sizes as well as color codes.
  • They performed over 200 two-qubit transversal gates on their encoded logical qubits.
  • They did a couple demonstrations, including the creation and verification of an encoded GHZ state and (more impressively) an encoded IQP circuit, whose outputs were validated using the Linear Cross-Entropy Benchmark (LXEB).
  • Crucially, they showed that in their system, the use of logically encoded qubits produced a modest “net gain” in success probability compared to not using encoding, consistent with theoretical expectations (though see below for the caveats). With a 48-qubit encoded IQP circuit with a few hundred gates, for example, they achieved an LXEB score of 1.1, compared to a record of ~1.01 for unencoded physical qubits.
  • At least with their GHZ demonstration and with a particular decoding strategy (about which more later), they showed that their success probability improves with increasing code size.
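For those unfamiliar with the benchmark: the linear cross-entropy score is just 2^n times the average ideal probability of the bitstrings the device actually outputs, so a uniformly random guesser scores ~1 and anything reliably above 1 indicates signal. A toy illustration, with a made-up Porter-Thomas-like "ideal" distribution standing in for any real circuit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
dim = 2 ** n

# Toy stand-in for an ideal circuit's output distribution:
# exponentially distributed weights, normalized over all 2^n strings.
# (Illustrates the LXEB formula only; not any real experiment.)
p_ideal = rng.exponential(size=dim)
p_ideal /= p_ideal.sum()

def lxeb(samples):
    """Linear cross-entropy benchmark: 2^n times the mean ideal
    probability of the sampled bitstrings."""
    return dim * p_ideal[samples].mean()

perfect = rng.choice(dim, size=20000, p=p_ideal)  # noiseless "device"
noise = rng.integers(dim, size=20000)             # uniform random guesser
print(lxeb(perfect), lxeb(noise))  # perfect sampler scores ~2, noise ~1
```

On this scale, the gap between 1.1 (encoded) and ~1.01 (unencoded) is the whole ballgame: both are far from a perfect sampler, but the encoded circuit retains roughly ten times more signal.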

Here’s what I currently understand to be the limitations of the work:

  • They didn’t directly demonstrate applying a universal set of 2- or 3-qubit gates to their logical qubits. This is because they were limited to transversal gates, and the Eastin-Knill Theorem shows that transversal gates can’t be universal. On the other hand, they were able to simulate up to 48 CCZ gates, which do yield universality, by using magic initial states.
  • They didn’t demonstrate the “full error-correction cycle” on encoded qubits, where you’d first correct errors and then proceed to apply more logical gates to the corrected qubits. For now it’s basically just: prepare encoded qubits, then apply transversal gates, then measure, and use the encoding to deal with any errors.
  • With their GHZ demonstration, they needed to use what they call “correlated decoding,” where the code blocks are decoded in conjunction with each other rather than separately, in order to get good results.
  • With their IQP demonstration, they needed to postselect on the event that no errors occurred (!!), which happened about 0.1% of the time with their largest circuits. This just further underscores that they haven’t yet demonstrated a full error-correction cycle.
  • They don’t claim to have demonstrated quantum supremacy with their logical qubits—i.e., nothing that’s too hard to simulate using a classical computer. (On the other hand, if they can really do 48-qubit encoded IQP circuits with hundreds of gates, then a convincing demonstration of encoded quantum supremacy seems like it should follow in short order.)

As always, experts are strongly urged to correct anything I got wrong.

I should mention that this might not be the first experiment to get a net gain from the use of a quantum error-correcting code: Google might or might not have gotten one in an experiment that they reported in a Nature paper from February of this year (for discussion, see a comment by Robin). In any case, though, the Google experiment just encoded the qubits and measured them, rather than applying hundreds of logical gates to the encoded qubits. Quantinuum also previously reported an experiment that at any rate got very close to net gain (again see the comments for discussion).

Assuming the result stands, I think it’s plausibly the top experimental quantum computing advance of 2023 (coming in just under the deadline!). We clearly still have a long way to go until “actually useful” fault-tolerant QC, which might require thousands of logical qubits and millions of logical gates. But this is already beyond what I expected to be done this year, and (to use the AI doomers’ lingo) it “moves my timelines forward” for quantum fault-tolerance. It should now be possible, among other milestones, to perform the first demonstrations of Shor’s factoring algorithm with logically encoded qubits (though still to factor tiny numbers, of course). I’m slightly curious to see how Gil Kalai and the other quantum computing skeptics wiggle their way out now, though I’m absolutely certain they’ll find a way! Anyway, huge congratulations to the Harvard/MIT/QuEra team for their achievement.


In other QC news, IBM got a lot of press for announcing a 1000-qubit superconducting chip a few days ago, although I don’t yet know what two-qubit gate fidelities they’re able to achieve. Anyone with more details is encouraged to chime in.


Yes, I’m well-aware that 60 Minutes recently ran a segment on quantum computing, featuring the often-in-error-but-never-in-doubt Michio Kaku. I wasn’t planning to watch it unless events force me to.


Do any of you have strong opinions on whether, once my current contract with OpenAI is over, I should focus my research efforts more on quantum computing or on AI safety?

On the one hand: I’m now completely convinced that AI will transform civilization and daily life in a much deeper way and on a shorter timescale than QC will — and that’s assuming full fault-tolerant QCs eventually get built, which I’m actually somewhat optimistic about (a bit more than I was last week!). I’d like to contribute if I can to helping the transition to an AI-centric world go well for humanity.

On the other hand: in quantum computing, I feel like I’ve somehow been able to correct the factual misconceptions of 99.99999% of people, and this is a central source of self-confidence about the value I can contribute to the world. In AI, by contrast, I feel like at least a thousand times more people understand everything I do, and this causes serious self-doubt about the value and uniqueness of whatever I can contribute.


Update (Dec. 8): A different talk on the Harvard/MIT/QuEra work—not the one I missed at Q2B—is now on YouTube.

Quantum miscellany

Tuesday, September 19th, 2023
  1. Tomorrow at 1:30pm US Central time, I’ll be doing an online Q&A with Collective[i] Forecast about quantum computing (probably there will also be questions about AI safety). It’s open to all. Hope to see some of you there!
  2. Toby Cubitt of University College London is visiting UT Austin. We’ve been discussing the question: can you produce a QMA witness state using a closed timelike curve? Since QMA⊆PSPACE, and since Fortnow, Watrous, and I proved that closed timelike curves (or anyway, Deutsch’s model of them) let you solve PSPACE problems, clearly a closed timelike curve lets you solve QMA decision problems, but that’s different from producing the actual witness state as the fixed-point of a polynomial-time superoperator. Toby has a neat recent result, which has as a corollary that you can produce the ground state of a local Hamiltonian using a CTC, if you have as auxiliary information the ground state energy as well as (a lower bound on) the spectral gap. But you do seem to need that extra information.

    Yesterday I realized there’s also a simpler construction: namely, take an n-qubit state from the CTC, and check whether it’s a valid QMA witness, having used Marriott-Watrous amplification to push the probability of error down to (say) exp(-n²). If the witness is valid, then send it back in time unmodified; otherwise replace it by the maximally mixed state. If valid witnesses exist, then you can check that this sets up a Markov chain whose stationary distribution is almost entirely concentrated on such witnesses. (If no valid witnesses exist, then the stationary distribution is just the maximally mixed state, or exponentially close to it.) One drawback of this construction is that it can only produce a Marriott-Watrous state, rather than the “original” QMA witness state.

    Is there a third approach, which overcomes the disadvantages of both mine and Toby’s? I’ll leave that question to my readers!
  3. On the theme of QMA plus weird physics, a wonderful question emerged from a recent group meeting: namely, what’s the power of QMA if we let the verifier make multiple non-collapsing measurements of the same state, as in the “PDQP” model defined by myself, Bouland, Fitzsimons, and Lee? I conjecture that this enhanced QMA goes all the way up to NEXP (Nondeterministic Exponential-Time), by a construction related to the one I used to show that PDQP/qpoly = ALL (i.e., non-collapsing measurements combined with quantum advice lets you decide literally all languages), and that also uses the PCP Theorem. I even have some candidate constructions, though I haven’t yet proven their soundness.

    In the past, I would’ve spent more time on such a problem before sharing it. But after giving some students a first crack, I now … just want to know the answer? Inspecting my feelings in my second year of leave at OpenAI, I realized that I still care enormously about quantum complexity theory, but only about getting answers to the questions, barely at all anymore about getting credit for them. Admittedly, it took me 25 years to reach this state of not caring.
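Coming back to item 2: the fixed-point reasoning there can be illustrated with a purely classical toy, where the "witnesses" are basis states, the verifier errs with the amplified probability ε, and rejection resets to the uniform distribution. All parameters below are made up for illustration:

```python
import numpy as np

d = 8                        # toy "witness" space of 8 basis states
valid = np.zeros(d, bool)
valid[3] = True              # exactly one valid witness
eps = 1e-4                   # amplified verifier error probability

# Per-state acceptance probability: 1-eps on valid witnesses, eps otherwise
acc = np.where(valid, 1 - eps, eps)

# Classical toy of the CTC superoperator: on acceptance keep the state;
# on rejection replace it with the maximally mixed (uniform) state.
# T[i, j] = probability of moving to state i from state j.
T = np.diag(acc) + np.outer(np.full(d, 1 / d), 1 - acc)

# The CTC's self-consistent state is the fixed point T @ pi = pi
w, v = np.linalg.eig(T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(pi.round(4))  # nearly all probability mass sits on index 3
```

Solving the fixed-point equation by hand gives π_i ∝ 1/(1 − acc_i), so the valid witness carries all but a ~d·ε fraction of the mass, matching the claim that the stationary distribution concentrates on valid witnesses.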

Palate cleanser

Monday, August 21st, 2023
  1. Ben Brubaker wrote a long piece for Quanta magazine about meta-complexity. The first three-quarters are a giant refresher on the story of computability and complexity theory in the 20th century—including Turing, Gödel, Shannon, Cook, Karp, Levin, Baker-Gill-Solovay, Sipser, Razborov, Rudich, and more. But then the last quarter gets into actually new (well, within the last couple years) developments, including the NP-completeness of “Partial-MCSP” and other progress on the Minimum Circuit Size Problem, and progress toward basing cryptography on the sole assumption P≠NP, and ruling out Impagliazzo’s “Heuristica” and “Pessiland” worlds. I’m quoted (and helped proofread the piece) despite playing no role in the new developments. Worth a read if you don’t already know this stuff.
  2. Duane Rich created a Part II of his YouTube video series on the Busy Beaver function. It features some of the core ideas from my Busy Beaver survey, clearly narrated and beautifully animated. If reading my survey is too much for you, now you can just watch the movie!
  3. Aznaur Midov recorded a podcast with me about quantum computing and AI—just in case you haven’t got enough of either of those lately.
  4. Oded Regev put an exciting paper on the arXiv, showing how to factor an n-bit integer using quantum circuits of size ~O(n^{3/2}) (multiple such circuits, whose results are combined classically), assuming a smoothness conjecture from number theory. This compares to ~O(n^2) for Shor’s algorithm. Regev’s algorithm uses classical algorithms for lattice problems, thereby connecting that subject to quantum factoring. This might or might not bring nearer in time the day when we can break (say) 2048-bit RSA keys using a quantum computer—that mostly depends, apparently, on whether Regev’s algorithm can also be made highly efficient in its use of qubits.
  5. A team from IBM, consisting of Sergey Bravyi, Andrew Cross, Jay Gambetta, Dmitri Maslov, Ted Yoder, and my former student Patrick Rall, put another exciting paper on the arXiv, which reports an apparent breakthrough in quantum error-correction—building a quantum memory based on LDPC (Low Density Parity Check) codes rather than the Kitaev surface code, which (they say), with a 0.1% physical error rate, can preserve 12 logical qubits for ten million syndrome cycles using 288 physical qubits, rather than the more than 4000 physical qubits the surface code would need. Anyone who understands in more detail is welcome to comment!
  6. Boaz Barak wrote a blog post about the history of the atomic bomb, and possible lessons for AI development today. I’d been planning to write a blog post about the history of the atomic bomb and possible lessons for AI development today. Maybe I’ll still write that blog post.
  7. Last week I attended the excellent Berkeley Simons Workshop on Large Language Models and Transformers, hosted by my former adviser Umesh Vazirani. While there, I gave a talk on watermarking of LLMs, which you can watch on YouTube (see also here for the PowerPoint slides). Shtetl-Optimized readers might also enjoy the talk by OpenAI cofounder Ilya Sutskever, An Observation on Generalization, as well as many other talks on all aspects of LLMs, from theoretical to empirical to philosophical to legal.
  8. Right now I’m excited to be at Crypto’2023 in Santa Barbara, learning a lot about post-quantum crypto and more, while dodging both earthquakes and hurricanes. On Wednesday, I’ll give an invited plenary talk about “Neurocryptography”: my vision for what cryptography can contribute to AI safety, including via watermarking and backdoors. Who better to enunciate such a vision than someone who’s neither a cryptographer nor an AI person? If you’re at Crypto and see me, feel free to come say hi.
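A numerical footnote to item 4: ignoring the constants and polylog factors hidden by the ~O notation (which matter enormously in practice), one can at least compare the two exponents at a cryptographically relevant size:

```python
# Illustration only: compare the asymptotic circuit-size exponents for
# factoring, with all constants and polylog factors dropped.
n = 2048               # bits in the integer to be factored
shor = n ** 2          # ~O(n^2) gates for Shor's algorithm
regev = n ** 1.5       # ~O(n^(3/2)) gates per circuit for Regev

print(f"Shor: ~{shor:.1e} gates; Regev: ~{regev:.1e} gates per circuit; "
      f"ratio ~{shor / regev:.0f}x")  # ratio is sqrt(n), about 45x here
```

The catch, as noted above, is that Regev's algorithm runs ~sqrt(n) such circuits and (in its original form) uses more qubits per circuit, so the practical crossover is far from automatic.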

On students as therapy

Friday, July 21st, 2023
From left: Ruizhe, Daniel, me, Jiahui, William

This summer, I’m delighted to report, we’ve had four (!) students complete their PhDs in computer science through UT Austin’s Quantum Information Center:

  • Dr. Ruizhe Zhang, student of my wife Dana Moshkovitz, who’s worked on numerous topics in quantum algorithms, optimization, meta-complexity, and machine learning, and who’s continuing to a postdoc at the Simons Institute in Berkeley.
  • Dr. Daniel Liang, student of me, who’s worked on efficient learning of stabilizer and near-stabilizer states, and who’s continuing to a postdoc at Rice University.
  • Dr. Jiahui Liu, student of me, who’s worked on quantum copy-protection, quantum money, and other quantum cryptographic functionalities, and who’s continuing to a postdoc at MIT.
  • Dr. William Kretschmer, student of me, who’s worked on quantum query complexity, oracle separations between quantum complexity classes, pseudorandom quantum states, and much more, and who’s continuing to a postdoc at the Simons Institute in Berkeley.

A fifth, Dr. Yuxuan Zhang, completed his PhD in condensed-matter physics.

We also had two postdocs finish this summer:

All told, I’ve now supervised or co-supervised a total of 12 PhD students and 15 postdocs (see my homepage for the complete list). I’m ridiculously proud of all of them; along with my kids, they’re one of the central joys of my life.

While there are many reasons to want to celebrate this news, I confess that among them is thumbing my nose at haters. This past week, Shtetl-Optimized endured yet another sustained troll attack. One troll claimed that my use of the names “Alice” and “Bob,” in discussing communication protocols, was Eurocentric and offensive, and threatened to contact UT Austin’s DEI office about this matter and whip up dozens of students to protest outside my office. A second troll (or was it the same troll?) accused my Chinese students of being spies and called them a long litany of racial slurs. He also accused me of being paid off by the Chinese government, and of blogging skeptically about quantum speedup claims merely to hide the fact that China will soon build a quantum computer able to break US encryption. These trolls, and others, pursued me relentlessly by both blog comments and email—acting like I was the unreasonable one for ignoring them—until I finally broke down and engaged, calling upon the Shtetl-Optimized Committee of Guardians (who I thank profusely) for needed moral support.

The fact that there are people so far beyond the reach of reason bothers me much more than it reasonably should. But whenever the toxicity of the Internet gets me down, I try to take solace from the real world. Over the past seven years, I’m proud to have built a research group at UT Austin that’s welcomed students and postdocs and visitors from all over the world, and that’s treated them as individuals on a shared journey to understand reality. I intend to continue in that spirit for as long as I’m able.

Common knowledge and quantum utility

Sunday, July 16th, 2023

Yesterday James Knight did a fun interview with me for his “Philosophical Muser” podcast about Aumann’s agreement theorem and human disagreements more generally. It’s already on YouTube here for those who would like to listen.


Speaking of making things common knowledge, several people asked me to blog about the recent IBM paper in Nature, “Evidence for the utility of quantum computing before fault tolerance.” So, uhh, consider it blogged about now! I was very happy to have the authors speak (by Zoom) in our UT Austin quantum computing group meeting. Much of the discussion focused on whether they were claiming a quantum advantage over classical, and how quantum computing could have “utility” if it doesn’t beat classical. Eventually I understood something like: no, they weren’t claiming a quantum advantage for their physics simulation, but they also hadn’t ruled out the possibility of quantum advantage (i.e., they didn’t know how to reproduce many of their data points in reasonable time on a classical computer), and they’d be happy if quantum advantage turned out to stand, but were also prepared for the possibility that it wouldn’t.

And I also understood: we’re now in an era where we’re going to see more and more of this stuff: call it the “pass the popcorn” era of potential quantum speedups for physical simulation problems. And I’m totally fine with it—as long as people communicate about it honestly, as these authors took pains to.

And then, a few days after our group meeting came three papers refuting the quantum speedup that was never claimed in the first place, by giving efficient classical simulations. And I was fine with that too.

I remember that years ago, probably during one of the interminable debates about D-Wave, Peter Shor mused to me that quantum computers might someday show “practical utility” without “beating” classical computers in any complexity-theoretic sense—if, for example, a single quantum device could easily simulate a thousand different quantum systems, and if the device’s performance on any one of those systems could be matched classically, but only if a team of clever programmers spent a year optimizing for that specific system. I don’t think we’re at that stage yet, and even if we do reach that stage, it hopefully won’t last forever. But I acknowledge the possibility that such a stage might exist and that we might be heading for it.

Book Review: “Quantum Supremacy” by Michio Kaku (tl;dr DO NOT BUY)

Friday, May 19th, 2023

Update (June 6): I wish to clarify that I did not write any of the dialogue for the “Scott Aaronson” character who refutes Michio Kaku’s quantum computing hype in this YouTube video, which uses an AI recreation of my voice. The writer appears to be physics/math blogger and podcaster Hassaan Saleem; see his website here. Luckily, the character and I do share many common views; I’m sure we’d hit it off if we met.


When I was a teenager, I enjoyed reading Hyperspace, an early popularization of string theory by the theoretical physicist Michio Kaku. I’m sure I’d have plenty of criticisms if I reread it today, but at the time, I liked it a lot. In the decades since, Kaku has widened his ambit to, well, pretty much everything, regularly churning out popular books with subtitles like “How Science Will Revolutionize the 21st Century” and “How Science Will Shape Human Destiny and Our Daily Lives.” He’s also appeared on countless TV specials, in many cases to argue that UFOs likely contain extraterrestrial visitors.

Now Kaku has a new bestseller about quantum computing, creatively entitled Quantum Supremacy. He even appeared on Joe Rogan a couple weeks ago to promote the book, surely reaching an orders-of-magnitude larger audience than I have in two decades of trying to explain quantum computing to non-experts. (Incidentally, to those who’ve asked why Joe Rogan hasn’t invited me on his show to explain quantum computing: I guess you now have an answer of sorts!)

In the spirit, perhaps, of the TikTokkers who eat live cockroaches or whatever to satisfy their viewers, I decided to oblige loyal Shtetl-Optimized fans by buying Quantum Supremacy and reading it. So I can now state with confidence: beating out a crowded field, this is the worst book about quantum computing, for some definition of the word “about,” that I’ve ever encountered.

Admittedly, it’s not obvious why I’m reviewing the book here at all. Among people who’ve heard of this blog, I expect that approximately zero would be tempted to buy Kaku’s book, at least if they flipped through a few random pages and saw the … level of care that went into them. Conversely, the book’s target readers have probably never visited a blog like this one and never will. So what’s the use of this post?

Well, as the accidental #1 quantum computing blogger on the planet, I feel a sort of grim obligation here. Who knows, maybe this post will show up in the first page of Google results for Kaku’s book, and it will manage to rescue two or three people from the kindergarten of lies.


Where to begin? Should we just go through the first chapter with a red pen? OK then: on the very first page, Kaku writes,

Google revealed that their Sycamore quantum computer could solve a mathematical problem in 200 seconds that would take 10,000 years on the world’s fastest supercomputer.

No, the “10,000 years” estimate was quickly falsified, as anyone following the subject knows. I’d be the first to stress that the situation is complicated; compared to the best currently-known classical algorithms, some quantum advantage remains for the Random Circuit Sampling task, depending on how you measure it. But to repeat the “10,000 years” figure at this point, with no qualifications, is actively misleading.

Turning to the second page:

[Quantum computers] are a new type of computer that can tackle problems that digital computers can never solve, even with an infinite amount of time. For example, digital computers can never accurately calculate how atoms combine to create crucial chemical reactions, especially those that make life possible. Digital computers can only compute on digital tape, consisting of a series of 0s and 1s, which are too crude to describe the delicate waves of electrons dancing deep inside a molecule. For example, when tediously computing the paths taken by a mouse in a maze, a digital computer has to painfully analyze each possible path, one after the other. A quantum computer, however, simultaneously analyzes all possible paths at the same time, with lightning speed.

OK, so here Kaku has already perpetuated two of the most basic, forehead-banging errors about what quantum computers can do. In truth, anything that a QC can calculate, a classical computer can calculate as well, given exponentially more time: for example, by representing the entire wavefunction, all 2n amplitudes, to whatever accuracy is needed. That’s why it was understood from the very beginning that quantum computers can’t change what’s computable, but only how efficiently things can be computed.

And then there’s the Misconception of Misconceptions, about how a QC “analyzes all possible paths at the same time”—with no recognition anywhere of the central difficulty, the thing that makes a QC enormously weaker than an exponentially parallel classical computer, but is also the new and interesting part, namely that you only get to see a single, random outcome when you measure, with its probability given by the Born rule. That’s the error so common that I warn against it right below the title of my blog.
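Both points are easy to make concrete in code. The following sketch (Python with numpy; nothing here is specific to any real QC library) simulates n qubits classically by storing all 2n amplitudes—demonstrating that a classical computer can do anything a QC can, just exponentially more slowly—and then samples a single measurement outcome according to the Born rule, which is all you ever get to see:

```python
import numpy as np

n = 3  # number of qubits; the classical simulation stores all 2**n amplitudes
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

# Start in |00...0> and apply a Hadamard to every qubit, by brute force.
state = np.zeros(2**n)
state[0] = 1.0
for q in range(n):
    # Build I (x) ... (x) H (x) ... (x) I acting on qubit q, then multiply.
    op = np.array([[1.0]])
    for i in range(n):
        op = np.kron(op, H if i == q else np.eye(2))
    state = op @ state

# Born rule: a measurement returns ONE random outcome, with probability
# |amplitude|^2 -- not "all paths at the same time."
probs = np.abs(state)**2
outcome = int(np.random.choice(2**n, p=probs))
print(f"{outcome:0{n}b}")  # a single random n-bit string (here uniform)
```

The catch, of course, is that `op` and `state` grow exponentially with `n`—which is exactly why this changes efficiency, not computability.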

[Q]uantum computers are so powerful that, in principle, they could break all known cybercodes.

Nope, that’s strongly believed to be false, just like the analogous statement for classical computers. Despite its obvious relevance for business and policy types, the entire field of post-quantum cryptography—including the lattice-based public-key cryptosystems that have by now survived 20+ years of efforts to find a quantum algorithm to break them—receives just a single vague mention, on pages 84-85. The possibility of cryptography surviving quantum computers is quickly dismissed because “these new trapdoor functions are not easy to implement.” (But they have been implemented.)


There’s no attempt, anywhere in this book, to explain how any quantum algorithm actually works, let alone is there a word anywhere about the limitations of quantum algorithms. And yet there’s still enough said to be wrong. On page 84, shortly after confusing the concept of a one-way function with that of a trapdoor function, Kaku writes:

Let N represent the number we wish to factorize. For an ordinary digital computer, the amount of time it takes to factorize a number grows exponentially, like t ~ e^N, times some unimportant factors.

This is a double howler: first, trial division takes only ~√N time; Kaku has confused N itself with its number of digits, ~log₂N. Second, he seems unaware that much better classical factoring algorithms, like the Number Field Sieve, have been known for decades, even though those algorithms play a central role in codebreaking and in any discussion of where the quantum/classical crossover might happen.
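To spell out the first point, here is even the most naive classical factoring method, trial division—a short sketch whose loop runs at most ~√N times, nothing like e^N, while the input *size* is only the ~log₂N bits needed to write N down:

```python
import math

def trial_division(N):
    """Factor N by trial division: the loop runs at most ~sqrt(N) times."""
    factors = []
    d = 2
    while d * d <= N:
        while N % d == 0:
            factors.append(d)
            N //= d
        d += 1
    if N > 1:  # whatever remains is prime
        factors.append(N)
    return factors

N = 105
# The input size is the bit-length ~log2(N), the quantity Kaku conflates with N.
print(trial_division(N), "-", N.bit_length(), "bits")
```

(And trial division is far from the state of the art: the Number Field Sieve is subexponential in the number of digits.)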


Honestly, though, the errors aren’t the worst of it. The majority of the book is not even worth hunting for errors in, because fundamentally, it’s filler.

First there’s page after page breathlessly quoting prestigious-sounding people and organizations—Google’s Sundar Pichai, various government agencies, some report by Deloitte—about just how revolutionary they think quantum computing will be. Then there are capsule hagiographies of Babbage and Lovelace, Gödel and Turing, Planck and Einstein, Feynman and Everett.

And then the bulk of the book is actually about stuff with no direct relation to quantum computing at all—the origin of life, climate change, energy generation, cancer, curing aging, etc.—except with ungrounded speculations tacked onto the end of each chapter about how quantum computers will someday revolutionize all of this. Personally, I’d say that

  1. Quantum simulation speeding up progress in biochemistry, high-temperature superconductivity, and the like is at least plausible—though very far from guaranteed, since one has to beat the cleverest classical approaches that can be designed for the same problems (a point that Kaku nowhere grapples with).
  2. The stuff involving optimization, machine learning, and the like is almost entirely wishful thinking.
  3. Not once in the book has Kaku even mentioned the intellectual tools (e.g., looking at actual quantum algorithms like Grover’s algorithm or phase estimation, and their performance on various tasks) that would be needed to distinguish 1 from 2.

In his acknowledgments section, Kaku simply lists a bunch of famous scientists he’s met in his life—Feynman, Witten, Hawking, Penrose, Brian Greene, Lisa Randall, Neil deGrasse Tyson. Not a single living quantum computing researcher is acknowledged, not one.

Recently, I’d been cautiously optimistic that, after decades of overblown headlines about “trying all answers in parallel,” “cracking all known codes,” etc., the standard for quantum computing popularization was slowly creeping upward. Maybe I was just bowled over by this recent YouTube video (“How Quantum Computers Break the Internet… Starting Now”), which despite its clickbait title and its slick presentation, miraculously gets essentially everything right, shaming the hypesters by demonstrating just how much better it’s possible to do.

Kaku’s slapdash “book,” and the publicity campaign around it, represents a noxious step backwards. The wonder of it, to me, is that Kaku holds a PhD in theoretical physics. And yet the average English major who’s written a “what’s the deal with quantum computing?” article for some obscure link aggregator site has done a more careful and honest job than Kaku has. That’s setting the bar about a millimeter off the floor. I think the difference is, at least the English major knows that they’re supposed to call an expert or two, when writing about an enormously complicated subject of which they’re completely ignorant.


Update: I’ve now been immersed in the AI safety field for one year, yet I wouldn’t consider myself nearly ready to write a book on the subject. My knowledge of related parts of CS, my year studying AI in grad school, and my having created the subject of computational learning theory of quantum states would all be relevant but totally insufficient. And AI safety, for all its importance, has less than quantum computing does in the way of difficult-to-understand concepts and results that basically everyone in the field agrees about. And if I did someday write such a book, I’d be pretty terrified of getting stuff wrong, and would have multiple expert colleagues read drafts.

In case this wasn’t clear enough from my post, Kaku appears to have had zero prior engagement with quantum computing, and also to have consulted zero relevant experts who could’ve fixed his misconceptions.

GPT-4 gets a B on my quantum computing final exam!

Tuesday, April 11th, 2023

Update (April 14): On the suggestion of my TA Justin Yirka, I gave GPT-4 problem 2 from the final exam, along with its own answers and my and Justin’s grading comments, and asked it whether it wanted to argue for a higher grade, and if so on what basis. Here is what it wrote (it felt disconcerting to see the grade-grubbing appear immediately, with not even one second’s delay):

I would like to argue for a higher grade on the following subproblems:

(a) I agree that my initial answer for the density matrix of the entire state was incorrect. However, I believe that my approach to the problem was correct, and I demonstrated knowledge of the necessary steps to calculate the reduced density matrix. I would like to request an additional point for demonstrating the correct approach, making it 2 points out of 5.

(b) My answer for the entanglement entropy was partially correct, as I calculated the eigenvalues of the reduced density matrix and plugged them into the correct entropy formula. However, I made a mistake in writing the characteristic equation, which led to incorrect eigenvalues. I would like to request an additional point for demonstrating the correct approach and partially correct calculations, making it 3 points out of 5.

(c) I acknowledge that my answer for the quantum circuit was incorrect, but I would like to point out that I attempted to construct a circuit using the required gates and measurements. Given the difficulty of this problem, I would like to request an additional point for attempting to construct a circuit using the required elements, making it 2 points out of 5.

In summary, I would like to request a total of 3 additional points for subproblems (a), (b), and (c), based on the correct approaches and partial calculations demonstrated in my answers.


[Warning: This might be the longest Shtetl-Optimized post of all time! But that’s OK; I expect most people will only want to read the introductory part anyway.]

As I’ve mentioned before, economist, blogger, and friend Bryan Caplan was unimpressed when ChatGPT got merely a D on his Labor Economics midterm. So on Bryan’s blog, appropriately named “Bet On It,” he made a public bet that no AI would score an A on his exam before January 30, 2029. GPT-4 then scored an A a mere three months later (!!!), leading to what Bryan agrees will likely be one of the first public bets he’ll ever have to concede (he hasn’t yet “formally” conceded, but only because of technicalities in how the bet was structured). Bryan has now joined the ranks of the GPT believers, writing

When the answers change, I change my mind

and

AI enthusiasts have cried wolf for decades. GPT-4 is the wolf. I’ve seen it with my own eyes.

But OK, labor econ is one thing. What about a truly unfakeable test of true intelligence? Like, y’know, a quantum computing test?

Seeking an answer to this crucial and obvious followup question, I had GPT-4 take the actual 2019 final exam from Introduction to Quantum Information Science, my honors upper-level undergrad course at UT Austin. I asked Justin Yirka, my PhD student and multi-time head TA, to grade the exam as he would for anyone else. This post is a joint effort of me and him.

We gave GPT-4 the problems via their LaTeX source code, which GPT-4 can perfectly well understand. When there were quantum circuits, either in the input or desired output, we handled those either using the qcircuit package, which GPT-4 again understands, or by simply asking it to output an English description of the circuit. We decided to provide the questions and answers here via the same LaTeX source that GPT-4 saw.

To the best of my knowledge—and I double-checked—this exam has never before been posted on the public Internet, and could not have appeared in GPT-4’s training data.

The result: GPT-4 scored 69 / 100. (Because of extra credit, the max score on the exam was 120, though the highest score that any student actually achieved was 108.) For comparison, the average among the students was 74.4 (though with a strong selection effect—many students who were struggling had dropped the course by then!). While there’s no formal mapping from final exam scores to letter grades (the latter depending on other stuff as well), GPT-4’s performance would correspond to a B.

(Note: I said yesterday that its score was 73, but commenters brought to my attention that GPT was given full credit for a wrong answer on 2(a), a density matrix that wasn’t even normalized.)

In general, I’d say that GPT-4 was strongest on true/false questions and (ironically!) conceptual questions—the ones where many students struggled the most. It was (again ironically!) weakest on calculation questions, where it would often know what kind of calculation to do but then botch the execution. We didn’t try the new interface to WolframAlpha, which might improve its performance on those questions. We’d be happy for anyone else to try that.

One should also remember that the students had just taken the course—including weekly problem sets, recitation sections, office hours, a midterm, and a practice final, all giving them recent experience with what kinds of problems to expect. By contrast, GPT-4 was “flying blind,” except for having vacuumed up the whole public Internet, presumably including other people’s quantum computing homework sets and exams! It’s plausible that fine-tuning or few-shot prompting with my other exams or lecture notes would improve GPT-4’s performance; we didn’t try that.

What else? For anyone who wants to try and replicate, I used the gpt-4 chat model in the playground, with a temperature of 0.2 and a max length of 1930 tokens.
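For replication purposes, those settings can be packaged into a request like the one below. The model name, temperature, and token limit are from this post; the client call shown in the comment is the then-current OpenAI chat-completions interface and is an assumption—it may have changed since, and it requires an API key to actually run:

```python
# Parameters used for the runs described above, as a chat-completions request.
params = {
    "model": "gpt-4",
    "temperature": 0.2,   # low temperature: mostly deterministic answers
    "max_tokens": 1930,   # the max length used in the playground
    "messages": [
        {"role": "user",
         "content": "1a. True or False? Unitary matrices preserve "
                    "the 2-norm of all complex vectors."},
    ],
}
# response = openai.ChatCompletion.create(**params)  # needs an API key
print(params["model"], params["temperature"], params["max_tokens"])
```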

Without further ado, here’s the exam. Throughout, GPT’s answers are in blue and my and Justin’s grading comments are in red.


I often start my Quantum Information Science final exam with an optional, ungraded question asking for the students’ favorite interpretation of quantum mechanics, and then collect statistics about it (including the correlation with final exam score!). Just for fun, let’s see how GPT-4 handles that question.

What is your favorite interpretation of quantum mechanics?
- Copenhagen
- Many-Worlds
- Bohmian Mechanics
- New Physics (Including Dynamical Collapse)
- Other:
- None
- What does it even matter?

As an AI language model, I do not have personal preferences or opinions. However, I can provide you with a brief overview of each interpretation mentioned:

1. Copenhagen Interpretation: This is the most widely accepted interpretation of quantum mechanics. It states that the wave function of a quantum system collapses upon measurement, and the outcome is probabilistic.

2. Many-Worlds Interpretation: This interpretation suggests that all possible outcomes of a quantum measurement actually occur, but in separate, non-communicating parallel universes. There is no wave function collapse in this interpretation.

3. Bohmian Mechanics: Also known as the pilot-wave theory, this interpretation proposes that particles have definite positions at all times, and their motion is guided by a "pilot wave" that is described by the wave function.

4. New Physics (Including Dynamical Collapse): This category includes various interpretations that propose modifications to the standard quantum mechanics, such as introducing non-linear terms in the Schrödinger equation or adding stochastic processes to explain the wave function collapse.

5. Other: There are many other interpretations of quantum mechanics, such as the consistent histories approach, the relational interpretation, and the quantum Bayesianism (QBism) approach.

6. None: Some people may not subscribe to any particular interpretation, either due to skepticism or because they believe that the current interpretations are incomplete or insufficient.

7. What does it even matter?: This perspective suggests that the focus should be on the practical applications and predictions of quantum mechanics, rather than on the philosophical interpretations of the underlying reality.

Ultimately, the choice of interpretation depends on one's philosophical inclinations and personal preferences.


Next let’s see the True/Falses. Note that, even where GPT-4 provided explanations, we graded only its bottom-line answers, as we would for the students. Its total on this section is 17 / 20.

1a. True or False?  Unitary matrices preserve the 2-norm of all complex vectors.

True. Unitary matrices preserve the 2-norm of all complex vectors. This property is known as being norm-preserving or isometric. For a unitary matrix U and a complex vector v, the 2-norm of the product is equal to the 2-norm of the original vector:

||Uv|| = ||v||

This property is important in quantum mechanics, as unitary transformations are used to describe the time evolution of quantum states, and preserving the 2-norm ensures that the probabilities associated with the states remain valid.

Correct

1b. True or False?  A pure state of n qubits is described by an n-dimensional complex unit vector.

False. A pure state of n qubits is described by a 2^n-dimensional complex unit vector. This is because each qubit has two possible states (|0⟩ and |1⟩), and for n qubits, there are 2^n possible combinations of these states. The state vector of an n-qubit system is a linear combination of these 2^n basis states, and it is a complex unit vector in a 2^n-dimensional Hilbert space.

Correct

1c. True or False?  The Bell inequality states that by using classical strategies, Alice and Bob can win the CHSH game with probability at most 3/4.

True. The no-cloning theorem states that it is impossible to create an exact copy of an arbitrary unknown quantum state. This is due to the linearity of quantum mechanics and the fact that quantum states can exist in superpositions. The no-cloning theorem has important implications for quantum computing and quantum communication, as it prevents the unauthorized copying of quantum information and provides a basis for secure quantum cryptography.

Correct (even though the explanation is nonsense)

1d. True or False?  Google's recent quantum supremacy experiment demonstrated the successful use of quantum error-correction.

FALSE

Correct

1e. True or False?  Lattice-based cryptography is one proposal for secure post-quantum public-key cryptography.

TRUE

Correct

1f. True or False?  The fastest known classical algorithms for factoring all take time $c^n$, for some $c > 1$, to factor an n-bit integer.

FALSE

Correct

1g. True or False?  Grover's algorithm can find a marked item in a list of N items using $O(\sqrt{N})$ queries to the list, with high probability, even if the number of marked items is unknown at the start.

TRUE

Correct

1h. True or False?  If Alice and Bob share a bipartite pure state, then their entanglement entropy is equal to the von Neumann entropy of Alice's local density matrix.

TRUE

Correct

1i. True or False?  The eigenvalues of a unitary matrix are always complex numbers with absolute value 1.

TRUE

Correct

1j. True or False?   The eigenvalues of a density matrix are always in [0,1].

TRUE

Correct

1k. True or False?   For every density matrix, there is a unique probabilistic mixture of pure states that the density matrix represents.

FALSE

Correct

1l. True or False?   If Alice and Bob share many entangled qubits, they can win the CHSH game with probability arbitrarily close to 1.

FALSE

Correct

1m. True or False?   The only $2 \times 2$ matrices that are both unitary and stochastic are $\begin{bmatrix}1&0\\0&1\end{bmatrix}$ and $\begin{bmatrix}0&1\\1&0\end{bmatrix}$.

FALSE

Wrong

1n. True or False?   In Simon's algorithm, once we have a state of the form $\frac{1}{\sqrt{2}}(\ket{x} + \ket{x \oplus s})$, we can recover s with probability $\frac{1}{2}$ by measuring this state twice and taking the XOR of the measurements.

FALSE

Correct

1o. True or False?   Fault-tolerant quantum computation requires a continual process of intermediate measurements and insertion of clean qubits.

TRUE

Correct

1p. True or False?   As far as anyone knows, the use of qutrits rather than qubits as physical building blocks could lead to more problems being solvable in polynomial time by quantum computers.

TRUE

Wrong

1q. True or False?   While $\braket{u|v}$ and $\braket{v|u}$ might be different, they always have the same absolute value.

TRUE

Correct

1r. True or False?   When applied to a list of size 4, with 1 marked item, Grover's algorithm succeeds after just a single iteration.

TRUE

Correct

1s. True or False?  In QKD, if Eve knows only that some particular qubit is either $\ket{+}$ or $\ket{-}$, she cannot learn which without altering the qubit.

TRUE

Wrong

1t. True or False?  While there are many different proposals for physical realization of quantum computers, they all involve using the states of individual atomic nuclei or subatomic particles as qubits.

FALSE

Correct

OK, let’s move on to short-answer questions! Total score here: 13 / 20.

2. Consider the state:
$$\ket{\psi} = \frac{\ket{00} + \ket{01} + \ket{10}}{\sqrt{3}}$$

(a). Calculate the reduced density matrix of the second qubit of $\ket{\psi}$.

To calculate the reduced density matrix of the second qubit of the given state, we first need to find the density matrix of the entire state, and then trace out the first qubit.

The density matrix of the entire state is given by:
$$\rho = \ket{\psi}\bra{\psi} = \frac{1}{3}\begin{pmatrix} 1 & 0 & 0 &
1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}$$

Now, we trace out the first qubit:
$$\rho_2 = \text{Tr}_1(\rho) = \frac{1}{3}\begin{pmatrix} 1+1 & 1 \\ 1 & 1+1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

Therefore, the reduced density matrix of the second qubit is:
$$\rho_2 = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

JUSTIN: 5/5 points.
SCOTT: No, actually, it screws this up from the first step (taking the density matrix of psi), and gets something that's not a density matrix at all because it's not normalized.  1 point out of 5, for knowing what kind of calculation to do.
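For reference, the correct calculation takes a few lines of numpy—a quick sketch, tracing out the first qubit of |ψ⟩ = (|00⟩+|01⟩+|10⟩)/√3:

```python
import numpy as np

# |psi> = (|00> + |01> + |10>)/sqrt(3); amplitudes in the order 00,01,10,11
psi = np.array([1, 1, 1, 0]) / np.sqrt(3)

# Full density matrix, reshaped so the indices are (a, b, a', b'),
# with a = first qubit (ket), b = second qubit (ket), primes for the bra side.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

# Trace out the first qubit: sum over a = a'.
rho2 = np.einsum('abac->bc', rho)

print(rho2)            # [[2/3, 1/3], [1/3, 1/3]]
print(np.trace(rho2))  # 1.0 -- unlike GPT-4's answer, whose trace was 4/3
```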

(b) Calculate $\ket{\psi}$'s entanglement entropy. You don't need to simplify your answer.

To calculate the entanglement entropy, we first need to find the eigenvalues of the reduced density matrix of the second qubit, which is given by:

$$\rho_2 = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$$

The characteristic equation for this matrix is:
$$\text{det}(\rho_2 - \lambda I) = \frac{1}{9}\begin{vmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{vmatrix} = (\lambda - 3)(\lambda - 1) = 0$$

So, the eigenvalues are $\lambda_1 = \frac{1}{3}$ and $\lambda_2 = \frac{2}{3}$.

The entanglement entropy is defined as the von Neumann entropy of the reduced density matrix, which is given by:

$$S(\rho_2) = -\text{Tr}(\rho_2 \log_2 \rho_2) = -(\lambda_1 \log_2 \lambda_1 + \lambda_2 \log_2 \lambda_2)$$

Plugging in the eigenvalues, we get:

$$S(\rho_2) = -\left(\frac{1}{3} \log_2 \frac{1}{3} + \frac{2}{3}\log_2 \frac{2}{3}\right)$$

Thus, the entanglement entropy of the state $\ket{\psi}$ is:

$$S(\rho_2) = -\left(\frac{1}{3} \log_2 \frac{1}{3} + \frac{2}{3}\log_2 \frac{2}{3}\right)$$

JUSTIN:
Up to 5 points.
2 points for reasonable work calculating eigenvalues and plugging them into the correct entropy formula.
5 points for fully correct answer.
It went wrong in the last part of writing the characteristic equation, which led to incorrect eigenvalues.
The correct eigenvalues are 1/2 + sqrt(5)/6 and 1/2 - sqrt(5)/6.

2/5 points.
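Likewise, the correct eigenvalues and entropy for part (b) are quick to verify numerically, starting from the correct reduced density matrix from part (a):

```python
import numpy as np

rho2 = np.array([[2, 1], [1, 1]]) / 3   # correct reduced state from part (a)

# Characteristic polynomial: lambda^2 - lambda + 1/9 = 0,
# giving eigenvalues 1/2 +- sqrt(5)/6 (as in Justin's comment).
evals = np.linalg.eigvalsh(rho2)        # returned in ascending order
expected = np.array([0.5 - np.sqrt(5)/6, 0.5 + np.sqrt(5)/6])

# Von Neumann entropy in bits.
entropy = -sum(l * np.log2(l) for l in evals)
print(evals, entropy)
```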

(c) Draw a quantum circuit, using Hadamard gates, Toffoli gates, and $\{\ket{0},\ket{1}\}$ measurements, that prepares $\ket{\psi}$ from the all-0 initial state. Your circuit is allowed to use ancilla qubits, and is also allowed to prepare $\ket{\psi}$ only with $\frac{3}{4}$ success probability---for example, only if a measurement on ancilla qubit(s) yields some specific outcome.
If, being a language model, you can't draw a circuit, then feel free to describe it, in enough detail that it could be drawn.

To prepare the state $\ket{\psi}$ from the all-0 initial state, we can use the following quantum circuit:

1. Start with three qubits in the state $\ket{000}$.
2. Apply a Hadamard gate (H) to the first qubit. This creates the state $\frac{1}{\sqrt{2}}(\ket{000} + \ket{100})$.
3. Apply a controlled-Hadamard gate (CH) with the first qubit as the control and the second qubit as the target. This creates the state $\frac{1}{2}(\ket{000} + \ket{010} + \ket{100} + \ket{110})$.
4. Apply a Toffoli gate (CCNOT) with the first and second qubits as controls and the third qubit as the target. This creates the state $\frac{1}{2}(\ket{000} + \ket{010} + \ket{100} + \ket{111})$.
5. Apply a Hadamard gate (H) to the third qubit. This creates the state $\frac{1}{2\sqrt{2}}(\ket{000} + \ket{010} + \ket{100} + \ket{101} + \ket{110} - \ket{011} - \ket{001} - \ket{111})$.
6. Measure the third qubit. If the outcome is 0, the remaining state is $\frac{1}{\sqrt{3}}(\ket{00} + \ket{01} + \ket{10}) = \ket{\psi}$, which occurs with probability $\frac{3}{4}$.

Here's a description of the circuit using standard quantum gates:

```
q0 -- H -- CH -- CCNOT -- M
         |     |
q1 --     H -- CCNOT -- M
               |
q2 --          X -- H -- M
```

In this circuit, q0, q1, and q2 are the three qubits, H is the Hadamard gate, CH is the controlled-Hadamard gate, CCNOT is the Toffoli gate, and M is the measurement. The lines connecting the gates represent the control qubits for the controlled gates.

JUSTIN:
Up to 5 points.
Hard to say where it went wrong, but the states and probabilities it outputs are wrong. 
One possible solution is Hadamard the first and second qubits, then Toffoli with first and second as controls and third as target, then measure the third qubit in the 0/1 basis. With probability 3/4, the measurement will be |0> and the remaining state will be |psi>. 
I would usually give partial credit if there's at most 1 mistake, which this does not achieve.
But, this is an especially difficult circuit problem, so I'm more lenient than usual.

1/5 points.

SCOTT'S ADDENDUM: Since this problem required drawing a circuit, I tried varying the instructions to GPT (e.g., "output qcircuit code", no special instructions, etc).  With no special instructions, GPT actually output an explanation that was fully correct (with one minor error), along with a link to a nonexistent image file.  But I'm still going to grade according to the first version I gave it!

(d) Explain the error in the following argument.  The Gottesman-Knill theorem states that any quantum circuit composed of Hadamard, CNOT, and Phase gates can be simulated classically in time polynomial in the size of the circuit.  Simon's algorithm solves Simon's problem quantumly using only a polynomial number of Hadamard gates and $O(n)$ oracle queries.  Therefore, Simon's problem can be solved classically in polynomial time using polynomially many oracle queries.

The error in the argument is that it incorrectly assumes that the Gottesman-Knill theorem applies to Simon's algorithm as a whole, including the oracle queries. The Gottesman-Knill theorem states that any quantum circuit composed of Hadamard, CNOT, and Phase gates can be simulated classically in time polynomial in the size of the circuit. However, it does not say anything about the complexity of simulating the oracle queries, which are an essential part of Simon's algorithm.

Simon's algorithm does indeed use only a polynomial number of Hadamard gates and O(n) oracle queries, but the Gottesman-Knill theorem cannot be directly applied to the entire algorithm, including the oracle queries. Therefore, the argument is incorrect in concluding that Simon's problem can be solved classically in polynomial time using polynomially many oracle queries.

JUSTIN: 3/3 points.

(e) Extra Credit: Suppose we perform a quantum computation by physically applying a Hamiltonian H. What's to stop us from running the computation, say, $10^{100}$ times faster, by simply replacing H by $H':=10^{100}H$, which is also a valid Hamiltonian?  Or to put it differently: is there some resource that we'd need $10^{100}$ more of to do this?

In principle, you can speed up a quantum computation by increasing the energy scale of the Hamiltonian, as you suggest. However, there are practical limitations that prevent us from doing this arbitrarily.

1. Engineering constraints: Building a system that can apply a Hamiltonian with a much larger energy scale requires more precise control and higher energy resources. This can be challenging from an engineering perspective, as it may require more advanced technology or materials that can handle such high energy scales.

2. Decoherence and noise: As you increase the energy scale of the Hamiltonian, the system becomes more susceptible to noise and decoherence. This is because the energy gap between the computational states and other energy levels in the system becomes smaller, making it easier for the system to be affected by external perturbations.
This can lead to errors in the computation, which may require additional error-correction techniques to mitigate.

3. Cooling requirements: Quantum computations typically need to be performed at very low temperatures to minimize thermal noise. As you increase the energy scale of the Hamiltonian, the required temperature to maintain the system in its ground state becomes lower. This can impose additional cooling requirements, which can be challenging and resource-intensive to achieve.

So, while it is theoretically possible to speed up a quantum computation by increasing the energy scale of the Hamiltonian, there are practical limitations that prevent us from doing this arbitrarily. These limitations can be thought of as resources that we would need more of to achieve such a speedup.

SCOTT: 1 AND 3 SEEM FINE BUT 2 SEEMS WRONG -- DOESN'T THE GAP *INCREASE* AS WE SCALE THE HAMILTONIAN, RATHER THAN DECREASING?
JUSTIN: Up to 4 points extra credit.
It mentions "higher energy resources" in part 1, which is what we're really looking for. And I'm not sure that parts 2 and 3 are fully correct.

2/4 extra credit points.

Still here? OK then, let’s move on to Shor’s factoring algorithm! Total score here: 7 / 10.

3. Suppose we use Shor's algorithm to factor $N=105$ into $3\cdot 5 \cdot 7$.  (Yes, $N$ is now a product of 3 primes!)  Suppose also that we make the choices $x=2$ and $Q=60000$.

(a) What is the order of the multiplicative group $\mathbb{Z}_N^\times$?

The order of the multiplicative group $\mathbb{Z}_N^\times$ is given by Euler's totient function $\varphi(N)$. Since $N = 3 \cdot 5 \cdot 7$, we have $\varphi(N) = \varphi(3) \cdot \varphi(5) \cdot \varphi(7) = (3-1)(5-1)(7-1) = 2 \cdot 4 \cdot 6 = \boxed{48}$.

JUSTIN: 2/2 points.

(b) What is the period of the function $f(r)=x^r (\bmod N)$?

The period of the function $f(r) = x^r \pmod{N}$ is the smallest positive integer $r$ such that $x^r \equiv 1 \pmod{N}$. This is the order of $x$ in the multiplicative group $\mathbb{Z}_N^\times$. Since the order of the group is 48, the period $r$ must be a divisor of 48.

To find the period, we can check the divisors of 48 in increasing order. We have $x = 2$ and $N = 105$, so we compute the powers of 2 modulo 105:

- $2^1 \equiv 2 \pmod{105}$
- $2^2 \equiv 4 \pmod{105}$
- $2^3 \equiv 8 \pmod{105}$
- $2^4 \equiv 16 \pmod{105}$
- $2^6 \equiv 64 \pmod{105}$
- $2^{12} \equiv 16 \pmod{105}$
- $2^{24} \equiv 1 \pmod{105}$

We find that the smallest positive integer $r$ such that $2^r \equiv 1 \pmod{105}$ is $r = \boxed{24}$.

JUSTIN: Up to 2 points.
The correct answer is 12, i.e. $2^{12} \equiv 1 \pmod{105}$.
Given the work, this earns partial credit.

1/2 points.
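As a quick sanity check of Justin's corrected period, the multiplicative order of 2 mod 105 can be computed by brute force:

```python
def multiplicative_order(x, N):
    """Smallest positive r with x^r == 1 (mod N)."""
    r, v = 1, x % N
    while v != 1:
        v = (v * x) % N
        r += 1
    return r

print(multiplicative_order(2, 105))  # 12, confirming the grader's answer
print(pow(2, 12, 105))               # 1
```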

(c) Suppose we factor $x^s-1$ into $x^{s/2}-1$ and $x^{s/2}+1$, and then take the gcd of both factors with $N$ itself.  Which prime factors of $N$, if any, would be ``peeled off'' this way?

Let's compute $x^{s/2} \pmod{N}$, where $x = 2$, $s = 24$, and $N = 105$. We have $x^{s/2} = 2^{12} \equiv 16 \pmod{105}$. Now we can compute the gcd of both factors with $N$:

- $\gcd(x^{s/2} - 1, N) = \gcd(16 - 1, 105) = \gcd(15, 105) = 15 = 3 \cdot 5$
- $\gcd(x^{s/2} + 1, N) = \gcd(16 + 1, 105) = \gcd(17, 105) = 1$

So, by taking the gcd of both factors with $N$, we would "peel off" the prime factors $3$ and $5$ of $N$. The prime factor $7$ is not peeled off in this process.

JUSTIN: Up to 3 points.
Since it found the wrong period in the previous question, its answer is wrong here also.  It also seems to invert the definition of "peel off."  But it does the gcd computations correctly.
1/3.
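Redoing the gcd step with the corrected period $s=12$ (so $x^{s/2}=2^6=64$), all three prime factors in fact come out:

```python
from math import gcd

x, N, s = 2, 105, 12
half = pow(x, s // 2, N)   # 2^6 mod 105 = 64
g1 = gcd(half - 1, N)      # gcd(63, 105) = 21 = 3 * 7
g2 = gcd(half + 1, N)      # gcd(65, 105) = 5
print(g1, g2)              # 21 5
```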

(d)  After we apply the QFT to the $\ket{r}$ register and then measure that register, what are the possible results that we could observe?

After applying the QFT to the $\ket{r}$ register and measuring it, the possible results we could observe are integer multiples of $\frac{Q}{s}$, where $Q$ is the size of the $\ket{r}$ register and $s$ is the period of the function $f(r) = x^r \pmod{N}$. In this case, we have $Q = 60000$ and $s = 24$. So, the possible results are integer multiples of $\frac{60000}{24} = 2500$.

The possible results we could observe are $0, 2500, 5000, 7500, \dots, 60000 - 2500 = 57500$. Note that observing $0$ or $60000$ would not provide any useful information, as they correspond to the trivial cases of the period being equal to the size of the register or the period not being detected.

JUSTIN: Up to 3 points.
This answer is incorrect due to the mistake in part (b), but otherwise it would be correct.
(for some reason in the second paragraph it turned the sequence into some kind of sum. That's weird.)

3/3 points.
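For reference, with the corrected period $s=12$ (which happens to divide $Q=60000$ exactly, so the peaks land precisely on multiples of $Q/s$), the same reasoning gives outcomes spaced 5000 apart:

```python
Q, s = 60000, 12
step = Q // s                       # 5000
outcomes = list(range(0, Q, step))  # 0, 5000, 10000, ..., 55000
print(step, len(outcomes))          # 5000 12
```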

OK, onwards to a question about Hamiltonians! Total score: 6 / 12.

4.
(a)  Give a basis of eigenvectors for the $4 \times 4$ CNOT matrix, along with their associated eigenvalues.

The CNOT (Controlled-NOT) matrix is a $4 \times 4$ matrix given by:

$$
CNOT = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix} $$

To find the eigenvectors and eigenvalues, we first need to find the characteristic polynomial of the matrix, which is given by the determinant of $(CNOT - \lambda I)$:

$$
\det(CNOT - \lambda I) = \det\begin{pmatrix}
1-\lambda & 0 & 0 & 0 \\
0 & 1-\lambda & 0 & 0 \\
0 & 0 & -\lambda & 1 \\
0 & 0 & 1 & -\lambda
\end{pmatrix} $$

The determinant of this matrix is $(1-\lambda)^2(-\lambda)^2 - (1-\lambda)^2 = (1-\lambda)^2(\lambda^2 - 1)$. Thus, the eigenvalues are $\lambda_1 = 1$, $\lambda_2 = 1$, $\lambda_3 = -1$, and $\lambda_4 = -1$.

Now, we find the eigenvectors associated with each eigenvalue:

1. For $\lambda_1 = 1$:

$$ (CNOT - I)v_1 = \begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & -1 & 1 \\
0 & 0 & 1 & -1
\end{pmatrix}v_1 = 0 $$

A basis for the eigenspace corresponding to $\lambda_1 = 1$ is given by $v_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}$.

2. For $\lambda_3 = -1$:

$$ (CNOT + I)v_3 = \begin{pmatrix}
2 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 1 & 1 \\
0 & 0 & 1 & 1
\end{pmatrix}v_3 = 0 $$

A basis for the eigenspace corresponding to $\lambda_3 = -1$ is given by $v_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ -1 \end{pmatrix}$ and $v_4 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix}$.

Thus, a basis of eigenvectors for the CNOT matrix is given by:

$$
v_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad
v_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad
v_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ -1 \end{pmatrix}, \quad
v_4 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \end{pmatrix} $$

with associated eigenvalues $\lambda_1 = 1$, $\lambda_2 = 1$, $\lambda_3 = -1$, and $\lambda_4 = -1$.

JUSTIN: Up to 3 points.
The eigenvalue $\lambda_4$ is wrong; it should be $+1$.
2/3.
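Justin's correction is easy to confirm numerically: CNOT is real symmetric, so its eigenvalues (which are $1, 1, 1, -1$) can be read off with `numpy`:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# eigh returns the eigenvalues in ascending order: -1, 1, 1, 1
vals = np.linalg.eigh(CNOT)[0]
print(np.round(vals, 6))
```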

(b) Give a 2-qubit Hamiltonian $H$ such that $e^{-iH}$ is the CNOT matrix.

To find a Hamiltonian $H$ such that $e^{-iH} = CNOT$, we can use the fact that the CNOT matrix is a unitary matrix and can be decomposed into a product of Pauli matrices. The Pauli matrices are given by:

$$
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $$

The CNOT matrix can be written as a product of tensor products of Pauli matrices:

$$
CNOT = \frac{1}{2} (I \otimes I + I \otimes \sigma_x + \sigma_z \otimes I - \sigma_z \otimes \sigma_x) $$

Now, we can find a Hamiltonian $H$ such that $e^{-iH} = CNOT$. We can use the fact that the exponential of a sum of operators is equal to the product of exponentials of the individual operators if the operators commute:

$$ e^{-iH} = e^{-i\frac{\theta}{2}(I \otimes I)} e^{-i\frac{\theta}{2}(I \otimes \sigma_x)} e^{-i\frac{\theta}{2}(\sigma_z \otimes I)} e^{i\frac{\theta}{2}(\sigma_z \otimes \sigma_x)} $$

We want to find the angle $\theta$ such that this product is equal to the CNOT matrix. Since the Pauli matrices have eigenvalues $\pm 1$, we can see that the angle $\theta$ must be $\pi$:

$$ e^{-iH} = e^{-i\frac{\pi}{2}(I \otimes I)} e^{-i\frac{\pi}{2}(I \otimes \sigma_x)} e^{-i\frac{\pi}{2}(\sigma_z \otimes I)} e^{i\frac{\pi}{2}(\sigma_z \otimes \sigma_x)} $$

Now, we can find the Hamiltonian $H$:

$$ H = \frac{\pi}{2}(I \otimes I + I \otimes \sigma_x + \sigma_z \otimes I - \sigma_z \otimes \sigma_x) $$

So, the 2-qubit Hamiltonian $H$ such that $e^{-iH} = CNOT$ is given by:

$$
H = \frac{\pi}{2} \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 2 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix} $$

JUSTIN: Up to 2 points.
Plugging into a calculator, this answer is incorrect. 
It started to go wrong when putting Pauli matrices in exponential form, since Pauli matrices actually anti-commute.  Partial credit for very significant effort.

1/2 points.

(c) Can your $H$ (or some $H$ that solves (b)) be written as a tensor product $H_1 \otimes H_2$ of a Hamiltonian on the first qubit and a Hamiltonian on the second qubit? If so, how?

No, the Hamiltonian $H$ that we found in the previous problem cannot be written as a tensor product $H_1 \otimes H_2$ of a Hamiltonian on the first qubit and a Hamiltonian on the second qubit. The reason is that the CNOT operation is an entangling operation, meaning that it creates entanglement between the two qubits. If the Hamiltonian could be written as a tensor product of Hamiltonians acting on individual qubits, it would not be able to create entanglement between the qubits.

To see this more explicitly, let's try to write the Hamiltonian $H$ as a tensor product:

$$ H = H_1 \otimes H_2 = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \otimes \begin{pmatrix} e & f \\ g & h \end{pmatrix} $$

The tensor product would result in a $4 \times 4$ matrix:

$$ H = \begin{pmatrix}
a e & a f & b e & b f \\
a g & a h & b g & b h \\
c e & c f & d e & d f \\
c g & c h & d g & d h
\end{pmatrix} $$

Comparing this with the Hamiltonian $H$ we found earlier:

$$ H = \frac{\pi}{2} \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 2 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix} $$

We can see that there is no way to find values for $a, b, c, d, e, f, g, h$ such that the two matrices are equal. This confirms that the Hamiltonian $H$ cannot be written as a tensor product of Hamiltonians acting on individual qubits.

JUSTIN: Up to 3 points.
A correct solution to part (b) written in the form of part (c) is $\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \otimes \frac{\pi}{2} \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$.
This question is a bit harsh with partial/carry-through credit, but the answer also explicitly got the main idea wrong, claiming that a tensor product H cannot give an entangling unitary.
0/3.
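Justin's closed-form solution can be verified numerically: with $H = \ket{1}\bra{1} \otimes \frac{\pi}{2}(I - \sigma_x)$, one gets $e^{-iH} = CNOT$ exactly (no stray global phase). A sketch using only `numpy`, exponentiating via the eigendecomposition since $H$ is Hermitian:

```python
import numpy as np

# Justin's Hamiltonian: |1><1| tensor (pi/2)(I - X)
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])
A = (np.pi / 2) * np.array([[1.0, -1.0], [-1.0, 1.0]])
H = np.kron(P1, A)

# e^{-iH} via the eigendecomposition (H is Hermitian)
vals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * vals)) @ V.conj().T

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
print(np.allclose(U, CNOT))  # True
```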

(d) Is there an $H$ that solves (b), while also being a sum $H_1\otimes I + I\otimes H_2$ of a Hamiltonian on the first qubit and a Hamiltonian on the second qubit?  Why or why not?

No, there is no Hamiltonian $H$ such that $e^{iH} = CNOT$ and $H = H_1 \otimes I + I \otimes H_2$. The reason is that the CNOT operation is an entangling operation, meaning that it creates entanglement between the two qubits. If the Hamiltonian could be written as a sum of Hamiltonians acting on individual qubits, it would not be able to create entanglement between the qubits.

To see this more explicitly, let's assume that there exists such an $H$:

$$ H = H_1 \otimes I + I \otimes H_2 = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \otimes \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \otimes \begin{pmatrix} e & f \\ g & h \end{pmatrix} $$

The sum of the tensor products would result in a $4 \times 4$ matrix:

$$ H = \begin{pmatrix}
a + e & b & a & b \\
c & d + e & c & d \\
a & b & a + h & b \\
c & d & c & d + h
\end{pmatrix} $$

Now, we want to find an $H$ such that $e^{iH} = CNOT$. The CNOT matrix is given by:

$$ CNOT = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{pmatrix} $$

However, there is no way to find values for $a, b, c, d, e, f, g, h$ such that $e^{iH} = CNOT$. This is because the CNOT operation creates entanglement between the qubits, and a Hamiltonian that can be written as a sum of Hamiltonians acting on individual qubits cannot create entanglement.

In summary, there is no Hamiltonian $H$ such that $e^{iH} = CNOT$ and $H = H_1 \otimes I + I \otimes H_2$ because the CNOT operation is an entangling operation, and a Hamiltonian that can be written as a sum of Hamiltonians acting on individual qubits cannot create entanglement.

JUSTIN: Up to 4 points.
It's correct that an H of this form can't generate CNOT because it can't create entanglement, although most of the rest is just repetitive fluff.
3/4.
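The key fact behind the grader's point: since $H_1 \otimes I$ and $I \otimes H_2$ commute, $e^{i(H_1\otimes I + I\otimes H_2)} = e^{iH_1} \otimes e^{iH_2}$, which is a product unitary and so cannot create entanglement. A quick numerical illustration with random Hermitian matrices (a sketch):

```python
import numpy as np

def exp_iH(H):
    """e^{iH} for Hermitian H, via the eigendecomposition."""
    vals, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(1j * vals)) @ V.conj().T

rng = np.random.default_rng(0)
M1 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M2 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H1, H2 = (M1 + M1.conj().T) / 2, (M2 + M2.conj().T) / 2

I = np.eye(2)
H = np.kron(H1, I) + np.kron(I, H2)

# Because the two terms commute, the exponential factorizes:
lhs = exp_iH(H)
rhs = np.kron(exp_iH(H1), exp_iH(H2))
print(np.allclose(lhs, rhs))  # True
```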

Now for a question about the complexity of generating certain uniform superpositions over sets (a special case of “QSampling”). Total score: 19 / 18 (of course counting extra credit).

5. In the Graph Isomorphism problem, we're given as input two $n$-vertex undirected graphs $G$ and $H$.  The problem is to determine whether they're isomorphic---in other words, whether there's any permutation of the vertex labels that makes $G$ and $H$ equal.

(a) Given as input an $n$-vertex graph $G$, describe how to sample, in classical $\mathrm{poly}(n)$ time, from a probability distribution $D_G$ over graphs such that:
- Whenever the graphs $G$ and $H$ are isomorphic, $D_G = D_H$.
- Whenever $G$ and $H$ are non-isomorphic, $D_G$ and $D_H$ have disjoint supports (i.e., no graph appears with nonzero probability in both of them).

To sample from the probability distribution $D_G$ over graphs, we can follow these steps:

1. Compute the canonical form of the input graph $G$. This can be done using a graph canonization algorithm, such as the one by Brendan McKay (nauty) or the one by Laszlo Babai (Traces). These algorithms run in classical $\mathrm{poly}(n)$ time.

2. Generate a random permutation of the vertex labels of the canonical form of $G$. This can be done by selecting a random permutation of the integers from $1$ to $n$, and then applying this permutation to the vertex labels of the canonical form.

3. Apply the random permutation to the canonical form of $G$ to obtain a new graph $G'$. This graph $G'$ is a sample from the probability distribution $D_G$.

The above procedure ensures that whenever the graphs $G$ and $H$ are isomorphic, $D_G = D_H$, because their canonical forms will be the same, and the random permutations applied to their canonical forms will generate the same distribution over graphs. On the other hand, whenever $G$ and $H$ are non-isomorphic, their canonical forms will be different, and the random permutations applied to their canonical forms will generate disjoint supports for $D_G$ and $D_H$.

JUSTIN: Up to 5 points.
The first step of computing the canonical form is superfluous and incorrect, since computing the canonical form is at least as hard as solving graph isomorphism and we don't know how to solve that in polynomial time.
Also, it looks like step 2 or 3 should have been about graph H instead of both being about G.
So it seems to have exactly the right idea but described it poorly.
4/5.
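The corrected procedure Justin has in mind is simply: apply a uniformly random vertex relabeling to $G$, with no canonization step. Then the support of $D_G$ is exactly the isomorphism class of $G$, which gives both required properties. A minimal sketch, with graphs represented as edge lists over vertices $0,\ldots,n-1$:

```python
import random

def sample_DG(edges, n):
    """Sample from D_G: relabel G's vertices by a uniformly random permutation."""
    perm = list(range(n))
    random.shuffle(perm)
    return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

# Example: a triangle on {0,1,2} plus an isolated vertex 3
G = [(0, 1), (1, 2), (0, 2)]
print(sample_DG(G, 4))
```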

(b) Given a probability distribution $D=(p_x)$ over $n$-bit strings $x$, define the ``QSampling state'' of $D$ to be $$\ket{\psi_D} := \sum_{x \in \{0,1\}^n} \sqrt{p_x} \ket{x}$$
Given two probability distributions $D$ and $D'$, suppose that the quantum circuit $\mathcal{C}$ maps $\ket{0^n}$ to $\ket{\psi_D}$, while the circuit $\mathcal{C}'$ maps $\ket{0^n}$ to $\ket{\psi_{D'}}$.

Then what is the output state of the circuit shown below, which acts on $n+1$ qubits?
$$ \Qcircuit @C=.5em @R=0.5em @!R {
\lstick{\ket{0}} & \gate{H} & \ctrl{1} & \gate{NOT} &  \ctrl{1} & \qw\\
\lstick{\ket{0^n}} & \qw & \gate{\mathcal{C}} & \qw & \gate{\mathcal{C}'} & \qw } $$

To find the output state of the given circuit, we first analyze it step by step.

1. After applying the Hadamard gate (H) on the first qubit, the state becomes:
$$\frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) \otimes \ket{0^n}$$

2. Next, the controlled-$\mathcal{C}$ gate is applied. This gate applies the $\mathcal{C}$ operation on the second register if the first qubit is in state $\ket{0}$. So, the state becomes:
$$\frac{1}{\sqrt{2}}(\ket{0} \otimes \ket{\psi_D} + \ket{1} \otimes \ket{0^n})$$

3. The NOT gate is applied on the first qubit, which flips the state of the first qubit. The state becomes:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{0^n})$$

4. Finally, the controlled-$\mathcal{C}'$ gate is applied. This gate applies the $\mathcal{C}'$ operation on the second register if the first qubit is in state $\ket{0}$. So, the state becomes:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{\psi_{D'}})$$

Thus, the output state of the circuit is:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{\psi_{D'}})$$

JUSTIN: 4/4.
I think solving this one is impressive.

(c) Now suppose we measure the first qubit of that output state in the $\{\ket{+},\ket{-}\}$ basis.  What is the probability of the outcome $\ket{+}$ if $D=D'$?  What about if $D$ and $D'$ have disjoint supports?

To find the probability of the outcome $\ket{+}$, we first express the output state in terms of the $\{\ket{+},\ket{-}\}$ basis. Recall that $\ket{+} = \frac{1}{\sqrt{2}}(\ket{0} + \ket{1})$ and $\ket{-} = \frac{1}{\sqrt{2}}(\ket{0} - \ket{1})$.

The output state is:
$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_D} + \ket{0} \otimes \ket{\psi_{D'}})$$

We can rewrite this state in terms of the $\{\ket{+},\ket{-}\}$ basis as follows:
$$\frac{1}{2}(\ket{+} \otimes (\ket{\psi_D} + \ket{\psi_{D'}}) + \ket{-} \otimes (\ket{\psi_D} - \ket{\psi_{D'}}))$$

Now, we can find the probability of the outcome $\ket{+}$ by taking the squared magnitude of the coefficient of $\ket{+}$ in the state.

Case 1: $D = D'$

In this case, $\ket{\psi_D} = \ket{\psi_{D'}}$. So, the output state becomes:
$$\frac{1}{2}(\ket{+} \otimes (2\ket{\psi_D}) + \ket{-} \otimes (0))$$

The probability of the outcome $\ket{+}$ is the squared magnitude of the coefficient of $\ket{+}$, which is:
$$\left|\frac{1}{2}(2)\right|^2 = 1$$

Case 2: $D$ and $D'$ have disjoint supports

In this case, $\ket{\psi_D}$ and $\ket{\psi_{D'}}$ are orthogonal, i.e., their inner product is zero: $\langle\psi_D|\psi_{D'}\rangle = 0$. The probability of the outcome $\ket{+}$ is the squared magnitude of the coefficient of $\ket{+}$, which is:
$$\left|\frac{1}{2}(\langle\psi_D|\psi_D\rangle + \langle\psi_{D'}|\psi_{D'}\rangle)\right|^2$$

Since $\langle\psi_D|\psi_D\rangle = 1$ and $\langle\psi_{D'}|\psi_{D'}\rangle = 1$ (both states are normalized), the probability of the outcome $\ket{+}$ is:
$$\left|\frac{1}{2}(1 + 1)\right|^2 = \frac{1}{4}(2)^2 = \frac{1}{2}$$

So, if $D = D'$, the probability of the outcome $\ket{+}$ is 1, and if $D$ and $D'$ have disjoint supports, the probability of the outcome $\ket{+}$ is $\frac{1}{2}$.

SCOTT: HOLY CRAP, LOOKS CORRECT!
JUSTIN: 5/5.
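The general formula behind both cases is $\Pr[+] = \frac{1}{2}\left(1 + \mathrm{Re}\langle\psi_D|\psi_{D'}\rangle\right)$, which can be checked directly on small examples (a sketch with hypothetical 2-bit distributions):

```python
import numpy as np

def prob_plus(pD, pDprime):
    """Pr[+] when measuring the control qubit of the circuit's output state."""
    psiD = np.sqrt(pD)
    psiDp = np.sqrt(pDprime)
    return 0.5 * (1 + np.dot(psiD, psiDp))

same = np.array([0.5, 0.5, 0.0, 0.0])
disjoint = np.array([0.0, 0.0, 0.5, 0.5])
print(prob_plus(same, same))      # 1.0  (identical distributions)
print(prob_plus(same, disjoint))  # 0.5  (disjoint supports)
```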

(d) Suppose your distributions $D_G$ from part (a) could be efficiently QSampled.  Using your previous work, explain how Graph Isomorphism could then be solved in $BQP$ (quantum polynomial time).

Suppose we can efficiently QSample the distributions $D_G$ and $D_H$ for two input graphs $G$ and $H$. We can then construct the QSampling states $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_H}}$ for these distributions using the quantum circuits $\mathcal{C}_G$ and $\mathcal{C}_H$, respectively, as described in the previous problems.

Now, consider the following quantum circuit acting on $n+1$ qubits:

$$ \Qcircuit @C=.5em @R=0.5em @!R { \lstick{\ket{0}} & \gate{H} & \ctrl{1} & \gate{NOT} &  \ctrl{1} & \qw\\  \lstick{\ket{0^n}} & \qw & \gate{\mathcal{C}_G} & \qw & \gate{\mathcal{C}_H} & \qw } $$

As we found earlier, the output state of this circuit is:

$$\frac{1}{\sqrt{2}}(\ket{1} \otimes \ket{\psi_{D_G}} + \ket{0} \otimes \ket{\psi_{D_H}})$$

Now, we measure the first qubit of the output state in the $\{\ket{+},\ket{-}\}$ basis. We know from our previous work that if $G$ and $H$ are isomorphic (i.e., $D_G = D_H$), the probability of the outcome $\ket{+}$ is 1, and if $G$ and $H$ are non-isomorphic (i.e., $D_G$ and $D_H$ have disjoint supports), the probability of the outcome $\ket{+}$ is $\frac{1}{2}$.

Thus, we can solve the Graph Isomorphism problem in $BQP$ as follows:

1. Construct the QSampling states $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_H}}$ for the input graphs $G$ and $H$ using the quantum circuits $\mathcal{C}_G$ and $\mathcal{C}_H$.

2. Implement the above quantum circuit and measure the first qubit in the $\{\ket{+},\ket{-}\}$ basis.

3. If the outcome is $\ket{+}$ with probability close to 1, conclude that the graphs $G$ and $H$ are isomorphic. If the outcome is $\ket{+}$ with probability close to $\frac{1}{2}$, conclude that the graphs $G$ and $H$ are non-isomorphic.

This algorithm runs in quantum polynomial time, as the QSampling circuits $\mathcal{C}_G$ and $\mathcal{C}_H$ are assumed to be efficient, and the rest of the operations (Hadamard, NOT, and controlled gates) are also efficient.

JUSTIN: The answer is longer than necessary, repeating some of its previous work.
Also, in theory, if the graphs are isomorphic then the probability of |+> will be exactly 1 not close to 1.

4/4.

(e) So then why doesn't this approach immediately imply a fast quantum algorithm for Graph Isomorphism?  Explain what could go wrong in passing from fast algorithms to sample $D_G$ and $D_H$, to fast algorithms to QSample them.

The approach described above does not immediately imply a fast quantum algorithm for Graph Isomorphism because there is a crucial difference between classical sampling and QSampling.

In the algorithm, we assumed that we can efficiently construct quantum circuits $\mathcal{C}_G$ and $\mathcal{C}_{G'}$ that map $\ket{0^n}$ to $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_{G'}}}$, respectively, given that the distributions $D_G$ and $D_{G'}$ can be efficiently sampled classically. However, this assumption might not hold in general.

Classical sampling allows us to draw samples from the distributions $D_G$ and $D_{G'}$, but it does not directly provide a way to prepare the corresponding QSampling states $\ket{\psi_{D_G}}$ and $\ket{\psi_{D_{G'}}}$. Preparing these quantum states requires a coherent superposition of all the elements in the distributions with the appropriate amplitudes, which might not be efficiently achievable using the classical sampling algorithms.

In other words, the ability to efficiently sample from a probability distribution classically does not necessarily imply the ability to efficiently prepare the corresponding QSampling state on a quantum computer. This is the main reason why the approach does not immediately lead to a fast quantum algorithm for Graph Isomorphism.

SCOTT: Correctly explains that sampling doesn't necessarily imply QSampling (i.e. generating a coherent superposition state), but doesn't say that this is because of the difficulty of uncomputing garbage.
2 / 5 extra credit points.

Next, a problem about quantum communication complexity. Total score: 5 / 10.

6. Suppose Alice and Bob hold $n$-bit strings $x=(x_1,...,x_n)$ and $y=(y_1,...,y_n)$ respectively.  One thing they might want to learn is the mod-2 inner product of their strings, $$x_1 y_1 + \ldots + x_n y_n (\bmod 2).$$

(a) Suppose Alice and Bob had a quantum communication protocol in which they are allowed to exchange up to $T$ qubits and to perform arbitrary local unitary transformations to their qubits (possibly including ancilla qubits), that ended with Bob knowing the above inner product, with success probability 1.  Explain how, by exchanging the same number of qubits $T$, Bob could also prepare an $n$-qubit state
of the form $$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z},$$
where $x$ is an $n$-bit string held by Alice.

If Alice and Bob have a quantum communication protocol that allows Bob to know the mod-2 inner product of their strings with success probability 1, then they can use the same protocol to prepare the desired $n$-qubit state.

Here's how they can do it:

1. Alice prepares an $n$-qubit state $\ket{\phi}$, where
$$\ket{\phi} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} \ket{z}.$$

2. Alice applies a controlled-phase gate to each qubit of $\ket{\phi}$, using her $n$-bit string $x$ as the control. This results in the state
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z}.$$

3. Alice sends the $n$-qubit state to Bob.

4. Bob uses the quantum communication protocol to learn the mod-2 inner product of their strings. Since the protocol has success probability 1, Bob will know the inner product with certainty.

5. Bob applies a local unitary transformation to the $n$-qubit state he received from Alice, based on the inner product he learned in step 4. This will result in the desired state
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z}.$$

Since Alice and Bob only exchanged $T$ qubits in the original protocol, they can also prepare the desired $n$-qubit state by exchanging the same number of qubits $T$.

JUSTIN: Up to 4 points.
This answer is partially correct and/or poorly worded. In particular, in step 3 Alice just sends the whole n-qubit state to Bob.
The right idea is in there, but doesn't earn full points.

2/4.

(b) Assume Alice and Bob have no preshared entanglement.  Recall Holevo's Theorem, which implies that in order to communicate $n$ bits to Bob reliably, Alice must send Bob at least $n$ qubits.  Using Holevo's Theorem together with part (a), prove that Alice and Bob must exchange at least $n$ qubits, even if they only want to learn the inner product mod 2 of their input strings $x$ and $y$.

From part (a), we know that if Alice and Bob have a quantum communication protocol that allows Bob to know the mod-2 inner product of their strings with success probability 1, then they can also prepare an $n$-qubit state of the form
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z},$$
by exchanging the same number of qubits $T$.

Now, let's consider the reverse process. Suppose Bob has the $n$-qubit state
$$\frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{x\cdot z} \ket{z}.$$
He can perform a quantum measurement in the computational basis to obtain a random $n$-bit string $z$. Then, he can compute the mod-2 inner product $x \cdot z$ and send it back to Alice.

Alice, knowing her own string $x$, can compute the mod-2 inner product $x \cdot z$ as well. If the inner product $x \cdot z$ that Bob sent to Alice matches the one she computed, then they have successfully communicated the inner product mod 2 of their input strings $x$ and $y$.

However, according to Holevo's Theorem, in order to communicate $n$ bits to Bob reliably, Alice must send Bob at least $n$ qubits. Since the $n$-qubit state prepared in part (a) effectively communicates $n$ bits of information (Alice's string $x$) to Bob, Holevo's Theorem implies that Alice and Bob must exchange at least $n$ qubits to prepare this state.

Therefore, even if Alice and Bob only want to learn the inner product mod 2 of their input strings $x$ and $y$, they must exchange at least $n$ qubits.

JUSTIN: Up to 6 points.
This answer doesn't earn any points. It only says that Alice and Bob learn the inner products and mistakenly claims this is n bits of information.
The correct solution recognizes that applying Hadamard to the state we showed how to prepare in part (a) gives exactly |x>.

0/6.
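Justin's correct solution is easy to verify numerically: applying $H^{\otimes n}$ to $\frac{1}{\sqrt{2^n}}\sum_z (-1)^{x\cdot z}\ket{z}$ recovers $\ket{x}$ exactly, so the state really does carry all $n$ bits of $x$. A small `numpy` sketch (with a hypothetical example string $x$):

```python
import numpy as np

n = 3
x = 0b101  # Alice's string, as a bitmask (example value)

# The state (1/sqrt(2^n)) * sum_z (-1)^{x.z} |z>
phases = np.array([(-1) ** bin(x & z).count("1") for z in range(2 ** n)])
state = phases / np.sqrt(2 ** n)

# Build H^{tensor n} and apply it
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

out = Hn @ state
print(int(np.argmax(np.abs(out))))  # 5, i.e. the basis state |101> = |x>
```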

(c) Now suppose we're no longer working mod 2, and Alice and Bob want to know whether their inner product
$$x_1 y_1 + \ldots + x_n y_n$$
is zero or nonzero as an integer.  (In other words, whether there's an $i$ such that $x_i=y_i=1$.)  Describe a protocol by which Alice and Bob can accomplish this, with high probability, by exchanging only $O(\sqrt{n}\log n )$ qubits in total.  The qubits can be spread across as many rounds of communication as necessary, and can be sent in either direction.

Alice and Bob can use the following protocol, based on the Grover's search algorithm, to determine whether their inner product is zero or nonzero as an integer, with high probability, by exchanging only $O(\sqrt{n}\log n)$ qubits in total:

1. Alice prepares a quantum register of $n$ qubits in the state $\ket{0}^{\otimes n}$.

2. Alice applies the Hadamard gate to each qubit, resulting in the state $$\ket{\psi} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} \ket{z}.$$

3. Alice and Bob perform the following steps for $O(\sqrt{n})$ iterations:

   a. Alice sends the $n$-qubit state $\ket{\psi}$ to Bob.

   b. Bob applies a controlled-phase gate to each qubit of $\ket{\psi}$, using his $n$-bit string $y$ as the control. This results in the state
   $$\ket{\psi'} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{y\cdot z} \ket{z}.$$

   c. Bob sends the $n$-qubit state $\ket{\psi'}$ back to Alice.

   d. Alice applies a controlled-phase gate to each qubit of $\ket{\psi'}$, using her $n$-bit string $x$ as the control. This results in the state
   $$\ket{\psi''} = \frac{1}{\sqrt{2^n}} \sum_{z \in \{0,1\}^n} (-1)^{(x+y)\cdot z} \ket{z}.$$
   Note that if $x_i = y_i = 1$ for some $i$, then the phase of the corresponding term in the sum will be flipped twice, resulting in no change. Therefore, the state $\ket{\psi''}$ will have a non-zero amplitude for the terms corresponding to the indices where $x_i = y_i = 1$.

   e. Alice applies the Grover's diffusion operator to the state $\ket{\psi''}$, which amplifies the amplitudes of the terms corresponding to the indices where $x_i = y_i = 1$. The diffusion operator can be implemented as follows:
      - Apply the Hadamard gate to each qubit.
      - Apply a multi-qubit controlled-Z gate, with all qubits as controls and an ancilla qubit as the target.
      - Apply the Hadamard gate to each qubit again.

   f. Alice updates the state $\ket{\psi}$ with the resulting state after applying the Grover's diffusion operator.

4. After $O(\sqrt{n})$ iterations, Alice measures the final state in the computational basis. If she obtains a non-zero result, then there exists an $i$ such that $x_i = y_i = 1$, and their inner product is nonzero as an integer. If she obtains a zero result, then their inner product is zero.

Since Alice and Bob exchange $n$ qubits in each iteration, and they perform $O(\sqrt{n})$ iterations, the total number of qubits exchanged is $O(n\sqrt{n}) = O(\sqrt{n}\log n)$, as each qubit can represent $\log n$ bits of information. Thus, Alice and Bob can determine whether their inner product is zero or nonzero as an integer, with high probability, by exchanging only $O(\sqrt{n}\log n)$ qubits in total.

SCOTT: LOOKS BASICALLY RIGHT, EXCEPT FOR A SLIP-UP AT THE END WHERE IT SAYS ALICE AND BOB EXCHANGE n QUBITS PER ITERATION RATHER THAN log n ...

JUSTIN: Up to 6 extra credit points.
Yeah, step 3a says it's sending n qubits, when it's actually log n.
It also seems incorrect beginning in the analysis of step d. It switches from summing over |z> and referring to the full strings x and y to referring to individual indices i,x_i,y_i. And I don't know what it means about some amplitudes going to zero since what we're really doing is having some amplitudes be +1 and some -1.
I don't think this protocol is correct as stated.
But, it has some of the main ideas. The analysis of the grover steps is good and using a phase gate is good.

3/6 extra credit.

OK, last question, about the k-SUM problem. Total score: 6 / 15.

7. In the famous $k$-SUM problem, we're given a list of integers $x_1,\ldots,x_n$, and are asked whether there are $k$ distinct indices, $i_1 < \ldots < i_k$, such that $x_{i_1} + \ldots + x_{i_k} = 0$.

For this problem, you can ignore factors of $\log n$ in the running time (indeed, that is what the $\widetilde{O}$ notation means).

(a) Assume $k$ is even AND that we are allowed multi-sets (aka repeated elements are allowed). Describe a classical algorithm that solves the $k$-SUM problem in $\widetilde{O}(n^{k/2})$ time, beating the trivial upper bound of $\widetilde{O}(n^k)$.

We can solve the k-SUM problem in $\widetilde{O}(n^{k/2})$ time using a meet-in-the-middle approach. Here's a high-level description of the algorithm:

1. Divide the list of integers into two equal halves: $A = x_1, \ldots, x_{n/2}$ and $B = x_{n/2+1}, \ldots, x_n$.

2. Compute all possible sums of $k/2$ elements from each half. For each half, this can be done in $O(n^{k/2})$ time by iterating through all possible combinations of $k/2$ elements.

3. Sort the lists of sums for both halves. This can be done in $O(n^{k/2} \log n)$ time using a comparison-based sorting algorithm.

4. For each sum in the first half, perform a binary search in the second half to find a sum that is equal to the negation of the sum from the first half. If such a sum is found, then we have found k distinct indices whose elements sum to 0. Binary search takes $O(\log n)$ time, so this step takes $O(n^{k/2} \log n)$ time.

Since we are ignoring factors of $\log n$, the overall running time of the algorithm is $\widetilde{O}(n^{k/2})$.

SCOTT: LOOKS LIKE IT WOULD WORK *IF* WE KNEW THE k-SET WAS EQUALLY BALANCED
BETWEEN THE LEFT AND RIGHT HALVES...

JUSTIN: Up to 5 points.
Right, step 1 is incorrect. Instead, it should generate all O(n^{k/2}) sums of subsets of size k/2. Nothing about dividing into two halves.

4/5.
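Justin's fix — generate all $O(n^{k/2})$ sums of size-$k/2$ multisets, with no splitting into halves — can be sketched in Python as follows (a minimal illustration, not part of the exam; the function name is ours):

```python
from itertools import combinations_with_replacement
from bisect import bisect_left

def k_sum_decision(xs, k):
    """Return True iff some size-k multiset of elements of xs sums to 0.

    Meet-in-the-middle for even k with repeated elements allowed:
    enumerate ALL size-(k/2) multiset sums (no splitting into halves),
    sort them, then binary-search for a matching negated sum.
    Time: O~(n^{k/2}).
    """
    assert k > 0 and k % 2 == 0
    # All O(n^{k/2}) sums of size-(k/2) multisets, sorted once.
    half_sums = sorted(sum(c) for c in combinations_with_replacement(xs, k // 2))
    # For each half-sum s, binary-search for a half-sum equal to -s.
    for s in half_sums:
        i = bisect_left(half_sums, -s)
        if i < len(half_sums) and half_sums[i] == -s:
            return True
    return False
```

For example, `k_sum_decision([1, 2, -3, 7], 4)` finds 1 + 1 + 1 + (-3) = 0 by matching the half-sums 2 and -2.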

(b) Assume $k$ is divisible by 3 and that we are again allowed multi-sets.  Describe a quantum algorithm that solves the $k$-SUM problem in $\widetilde{O}(n^{k/3})$ time.

We can solve the k-SUM problem in $\widetilde{O}(n^{k/3})$ time using a quantum algorithm based on Grover's search. Here's a high-level description of the algorithm:

1. Divide the list of integers into three equal parts: $A = x_1, \ldots, x_{n/3}$, $B = x_{n/3+1}, \ldots, x_{2n/3}$, and $C = x_{2n/3+1}, \ldots, x_n$.

2. Compute all possible sums of $k/3$ elements from each part. For each part, this can be done in $O(n^{k/3})$ time by iterating through all possible combinations of $k/3$ elements.

3. Sort the lists of sums for all three parts. This can be done in $O(n^{k/3} \log n)$ time using a comparison-based sorting algorithm.

4. Define an oracle function $f$ that takes as input a sum from the first part and a sum from the second part, and returns 1 if there exists a sum in the third part that is equal to the negation of the sum of the inputs, and 0 otherwise. This oracle can be implemented using a binary search on the sorted list of sums for the third part, which takes $O(\log n)$ time.

5. Use Grover's search algorithm to find a pair of sums from the first and second parts that satisfy the oracle function. Grover's search has a quadratic speedup over classical search, so it takes $O(\sqrt{n^{k/3}}) = O(n^{k/6})$ time.

Since we are ignoring factors of $\log n$, the overall running time of the algorithm is $\widetilde{O}(n^{k/3})$.

JUSTIN: Up to 5 points.
Again, step 1 is incorrect, it shouldn't divide into three parts.
The division into three parts is more problematic this time as it comes up in step 4.
And it ends up claiming n^{k/6} time in step 5, which would be even better than the n^{k/3} it's asked for.
Basically all this gets correct is to use Grover in some way.

1/5.
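For reference, here is how the accounting works out in a corrected version of (b), under the standard approach (a sketch, not from the transcript): classically build a sorted list of all $O(n^{k/3})$ size-$k/3$ multiset sums, then run Grover's search over all $n^{k/3} \cdot n^{k/3}$ pairs of such sums, checking each candidate pair with a binary search for the negated pair-sum in the sorted list:

$$\underbrace{\widetilde{O}\!\left(n^{k/3}\right)}_{\text{build and sort the list}} \;+\; \underbrace{O\!\left(\sqrt{n^{k/3}\cdot n^{k/3}}\right)\cdot O(\log n)}_{\text{Grover over pairs, binary-search check}} \;=\; \widetilde{O}\!\left(n^{k/3}\right).$$

This matches the $\widetilde{O}(n^{k/3})$ target, with no spurious $n^{k/6}$ improvement.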

(c) Suppose we wanted to prove that the algorithm from (b) was the fastest possible quantum algorithm for $k$-SUM.  Could that be shown via a lower bound on $k$-SUM's quantum query complexity?  Why or why not?

Proving that the algorithm from (b) is the fastest possible quantum algorithm for k-SUM would require showing a lower bound on k-SUM's quantum query complexity that matches the upper bound achieved by the algorithm. However, a lower bound on the quantum query complexity alone might not be sufficient to prove the optimality of the algorithm, as it only considers the number of queries to the oracle function and does not account for other computational resources, such as time or space complexity.

To show that the algorithm from (b) is the fastest possible quantum algorithm for k-SUM, one would need to establish a lower bound on the quantum time complexity or some other relevant complexity measure that matches the upper bound achieved by the algorithm. This would provide a more comprehensive understanding of the problem's inherent difficulty and the limitations of quantum algorithms in solving it.

JUSTIN: Up to 5 extra credit points.
This is a nice description of query vs time complexity, but misses the point that query complexity can prove a bound of at most Omega(n) since there are at most n potential queries to make.

1/5 extra credit points.