I believe that the current containment measures (in particular implemented as severely as in Italy, Spain, etc.) will in the long run do more harm than the virus. For example, optimistic projections by The Guardian for Ireland indicate that strict containment measures could force as much as 25% of the entire population out of a job, at which point it is no longer a question of giving people financial support but rather of how to restart an economy that has come to a grinding halt. With such ramifications in plain sight, I conjecture that it is only a question of time before (for better or worse) we return to our usual modus operandi, even if doing so will overstretch hospital capacity for a (hopefully only short) while.

Do you agree with my assessment / prediction? If not, why?

]]>Why make up a measure? Why not take a well-known algorithmic problem which QCs, while they have not achieved supremacy over classical computers, can at least DO, and ask how big an instance of it they can do?

At this time I still get dismissive, contemptuous “it’s not that simple” responses whenever I ask someone who knows quantum computation “how big a number can QCs currently factor”, but certainly that WILL be a good measure eventually. OK, I get it: Shor’s algorithm is far too complicated to be implementable on QCs we can build today. But I’m not asking that. I’m just asking for ANY algorithmic problem that QCs have nontrivial power to do, even if they fall far short of supremacy over classical computers, or even over humans working with pencil and paper. It doesn’t have to be Shor factoring or Grover searching, just something that is framed as an algorithmic problem and which QCs can be built to tackle with nontrivial performance.

Of course we know “generate random bits” is such a thing, so this criterion is already achieved. Call that level 0. For level 1, it has to solve a problem nontrivially harder than that.
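The level-0 task can be made concrete: a single Hadamard on |0⟩ followed by a measurement yields a uniformly random bit. Here is a minimal classical NumPy simulation of that one-qubit circuit (the simulation is of course not a QC itself; it just states the level-0 task precisely):

```python
import numpy as np

# Level-0 task: "generate random bits".
# On a QC this is one Hadamard gate on |0> followed by a measurement;
# here we simulate that single qubit classically.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ np.array([1.0, 0.0])              # H|0> = (|0> + |1>)/sqrt(2)
probs = np.abs(state) ** 2                    # Born rule: each outcome has probability 1/2

rng = np.random.default_rng(0)                # classical sampling stands in for measurement
bits = rng.choice([0, 1], size=10_000, p=probs)
print(probs)        # [0.5 0.5]
print(bits.mean())  # close to 0.5
```

Any level-1 candidate would have to be nontrivially harder than this one-gate circuit.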

]]>Jorge Hirsch repudiates the h-index he invented (M): “I have now come to believe that it can also fail spectacularly and have severe unintended negative consequences. I can understand how the sorcerer’s apprentice must have felt.” (This is an aside; the actual linked article is about Hirsch’s difficulty in breaching the orthodox consensus on superconductivity.)

Now “breaching the orthodox consensus” sounds interesting, so I read it. He writes:

I expected back then that ‘this mechanism’ would be quickly accepted … Alternatively, that somebody would prove me wrong, so I could move on. So where are we 30 years later?

I have not moved on. I have since then published well over 100 papers on hole superconductivity …, the papers have been by and large ignored and the community is as unconvinced as it was 30 years ago (or even more) that this has anything to do with real world superconductivity.

The part “somebody would prove me wrong, so I could move on” touched me. Not because I have an opinion on hole superconductivity, or even the ability to form one (even if I spent time on it), but because of the cases where I could prove somebody wrong:

I have read pages 1-6 and browsed the rest of the draft. From the results, my guess is that Theorem 11, Lemma 13, and Theorem 14 will hold water. I am skeptical about the XXX result, as you may have guessed. However, let me make it clear that I will study your draft because I expect to learn interesting ideas and concepts while studying it, not because I want to disprove your XXX result.

I am not even sure whether I would be able to make somebody accept that his proof of some lemma or theorem is flawed if the sole purpose of my reading his paper had been to prove him wrong. But this is only part of the truth. Often I know exactly why some proof is wrong and have already written something like “The difficulty is that the “iff” instead of “if” in Corollary 11 is not justified (the proof of Lemma 10 is wrong). What happens is that …” to the author. But I am too lazy to follow up when my explanations have not yet convinced the author. And then there are the realities and constraints of my actual life. So I have to answer:

sadly, I didn’t find time to do anything related to logic since I last answered you. …

However, you will probably be able to find issues yourself in the less expected parts of your paper, if you try seriously.

But in the end, I don’t really know whether an author would benefit from being proven wrong. Russell did prove Frege wrong, and Frege accepted it. But subsequently, especially after his wife died soon thereafter, Frege was depressed for many years.

]]>they’ll soon have what they call “the most powerful” quantum computer

Does anybody take such announcements seriously after Rigetti’s 128-qubit-QC-in-one-year announcement? (Does Honeywell have a crystal ball to know that nobody else will make any strides soon?)

]]>In practice, depth is more important than width, for the same reason that qubit quality is more important than qubit count. And higher connectivity corresponds to increased depth.

If you relax that requirement, then depth would need to be something like k^2 or more.

I’m usually just interested in the size of the circuits that I would be able to move from an emulator to actual hardware.

In a sense, that’s the effective size of the QC. And I would use the H/Toffoli gate set here, since it has the least platform affinity and is a nice reality check given how expensive AND gates are on most implementations.

If your hardware doesn’t run all 20×20 H/AND circuits with fidelity 2/3 or more then you don’t have a 20×20 QC. Fair? 🙂
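The criterion can be sketched as code. This is a minimal sketch, not a benchmark implementation: the 20×20 size and the 2/3 fidelity threshold come from the comment above, the “hardware” run is a placeholder (here just the noiseless emulation itself, since no real device is being called), and the demo uses 4 qubits and depth 4 to keep the toy statevector emulator small:

```python
import numpy as np

def apply_h(state, q, n):
    """Apply a Hadamard to qubit q of an n-qubit statevector (in place)."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)  # view into state
    a = psi[0].copy()
    b = psi[1].copy()
    psi[0] = (a + b) / np.sqrt(2)
    psi[1] = (a - b) / np.sqrt(2)
    return state

def apply_toffoli(state, c1, c2, t, n):
    """Apply a Toffoli (controls c1, c2; target t) in place: flips t iff both controls are 1."""
    psi = np.moveaxis(state.reshape([2] * n), [c1, c2, t], [0, 1, 2])
    tmp = psi[1, 1, 0].copy()
    psi[1, 1, 0] = psi[1, 1, 1]
    psi[1, 1, 1] = tmp
    return state

# Random width-n, depth-`depth` circuit over the H/Toffoli gate set.
rng = np.random.default_rng(1)
n, depth = 4, 4                       # the criterion in the text asks for 20, 20
state = np.zeros(2 ** n)
state[0] = 1.0                        # start in |00...0>

for _ in range(depth):
    if rng.random() < 0.5:
        apply_h(state, int(rng.integers(n)), n)
    else:
        c1, c2, t = rng.choice(n, size=3, replace=False)
        apply_toffoli(state, c1, c2, t, n)

ideal = state.copy()
# On real hardware you would estimate the output state/distribution and compare;
# here the "hardware" result is a placeholder equal to the noiseless emulation.
hardware = ideal
fidelity = abs(np.vdot(ideal, hardware)) ** 2
print(fidelity >= 2 / 3)  # the acceptance criterion; trivially True for the placeholder
```

The emulator side is exactly the “emulator vs. actual hardware” comparison mentioned above; only the hardware call would need to be filled in for a real test.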
