It’d be GREAT if you could join us at Stanford. All the profs I’ve been talking to have been saying how great it would be if you joined here.

Looking forward to your decision and seeing you at Stan!!

Ajay

Ooops! (and bugger it, damn, and blast)

Many apologies: low caffeine + too little sleep = idiot who gets names wrong

Cheers,

Andrew

The slide is cool. Somebody call Penn & Teller! 🙂

Thank you for the answer. I guess all of a sudden I begin to see why this whole quantum computing business really *is* interesting 😎

By “existing error rates,” do you mean the error rates that are achievable experimentally? That varies a lot with the technology, but in general I’d say closer to 45% than 0.001%. 🙂 (5-20% seems like a pretty typical range.)

The error rate is *defined* as the number of errors per qubit per timestep. Of course, if your timesteps take twice as long, then *all else being equal* you’ll encounter roughly twice as many errors per timestep. But all else is *not* necessarily equal, since slowing down the computation might also slow down the error processes.
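The “all else being equal” bookkeeping above is just a product of three factors. A minimal sketch, with made-up rates purely for illustration (none of these numbers come from the comments above):

```python
# Illustration with hypothetical numbers: with the error rate *defined* as
# errors per qubit per timestep, the expected number of errors over a run
# is simply rate * qubits * timesteps.
def expected_errors(rate_per_qubit_per_timestep, n_qubits, n_timesteps):
    """Expected number of errors over the whole computation."""
    return rate_per_qubit_per_timestep * n_qubits * n_timesteps

r = 0.01                                # hypothetical per-qubit, per-timestep rate
base = expected_errors(r, 10, 100)      # roughly 10 expected errors

# "All else being equal": if each timestep takes twice as long and the physical
# error processes are unchanged, the per-timestep rate doubles, and so does
# the expected error count. (If slowing down also slows the error processes,
# the per-timestep rate need not double -- that is the caveat in the text.)
slow = expected_errors(2 * r, 10, 100)  # roughly twice `base`
```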

Scott’s remark has the following natural extension: *[…], and people are working just as hard to raise the lower bound of the threshold theorem, which is to say, to show that for “typical” quantum systems, error rates lower than 45% still allow polynomial-time classical simulation.*

Notice that the above carefully says “people are working just as hard”, rather than “just as many people are working”! 🙂

The two problems are of precisely equal fundamental interest (obviously). From a practical point of view … well … raising the lower bound has a broader range of near-term applications.

Indeed, the following—deliberately provocative—conjecture seems natural (to me as an engineer): *Given a system composed of “n” total qubits, linked by any number of gates, having a fixed nonzero error rate per gate, and a fixed nonzero probability of correct computation, that system can be simulated with classical resources that scale polynomially in “n”.*

That this conjecture might be compatible with the threshold theorems is (hopefully) ensured by the restriction “fixed nonzero probability of correct computation”, which is intended to force larger-n devices to allocate a larger fraction of their qubit resources to ancilla qubits and/or error correction codes.
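A back-of-envelope way to see why that restriction bites (my own sketch, with illustrative numbers that are not from the comments above): without any error correction, a fixed per-gate error rate caps the number of gates that can all succeed with a fixed probability at a *constant*, independent of n—so a scalable device has no choice but to spend qubits on error correction.

```python
import math

# Sketch with hypothetical numbers: if each gate independently fails with
# probability p and there is no error correction, the chance that all G gates
# succeed is (1 - p)**G. Demanding a fixed success probability c therefore
# bounds G by a constant that does not grow with n.
def max_gates(p, c):
    """Largest G with (1 - p)**G >= c."""
    return math.floor(math.log(c) / math.log(1 - p))

limit = max_gates(p=0.01, c=0.5)  # a few dozen gates, however many qubits you have
```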

Thinking about this conjecture leads me to imagine a hilarious Alice-and-Bob “arms race”, in which for fixed *n* Alice implements a successful error detection code, which Bob then breaks by doubling *n*, forcing Alice to implement yet another level of error detection codes, etc.

At this point it becomes pretty clear that the error threshold theorems are darn tricky, and that the details of their assumptions *do* matter.

Which makes me hope that someone on Scott’s blog—please!—will explain the underlying assumptions of the threshold theorems in greater detail!
