Archive for the ‘Quantum’ Category

Paul Bernays Lectures

Monday, September 9th, 2019

Last week, I had the honor of giving the annual Paul Bernays Lectures at ETH Zürich. My opening line: “As I look at the list of previous Bernays Lecturers—many of them Nobel physics laureates, Fields Medalists, etc.—I think to myself, how badly did you have to screw up this year in order to end up with me?”

Paul Bernays was the primary assistant to David Hilbert, before Bernays (being Jewish by birth) was forced out of Göttingen by the Nazis in 1933. He spent most of the rest of his career at ETH. He’s perhaps best known for the von Neumann-Bernays-Gödel set theory, and for writing (in a volume by “Hilbert and Bernays,” but actually just Bernays) arguably the first full proof of Gödel’s Second Incompleteness Theorem.

Anyway, the idea of the Paul Bernays Lectures is to rotate between Bernays’s different interests in his long, distinguished career—interests that included math, philosophy, logic, and the foundations of physics. I mentioned that, if there’s any benefit to carting me out to Switzerland for these lectures, it’s that quantum computing theory combines all of these interests. And this happens to be the moment in history right before we start finding out, directly from experiments, whether quantum computers can indeed solve certain special problems much faster.

The general theme for my three lectures was “Quantum Computing and the Fundamental Limits of Computation.” The attendance was a few hundred. My idea was to take the audience from Church and Turing in the 1930s, all the way to the quantum computational supremacy experiments that Google and others are doing now—as part of a single narrative.

If you’re interested, streaming video of the lectures is available as of today (though I haven’t watched it—let me know if the quality is OK!), as well as of course my slides. Here you go:

Lecture 1: The Church-Turing Thesis and Physics (watch streaming / PowerPoint slides) (with an intro in German by Giovanni Sommaruga, who knew Bernays, and a second intro in English by Renato Renner, who appeared on this blog here)

Abstract: Is nature computable?  What should we even mean in formulating such a question?  For generations, the identification of “computable” with “computable by a Turing machine” has been seen as either an arbitrary mathematical definition, or a philosophical or psychological claim. The rise of quantum computing and information, however, has brought a fruitful new way to look at the Church-Turing Thesis: namely, as a falsifiable empirical claim about the physical universe.  This talk seeks to examine the computability of the laws of physics from a modern standpoint—one that fully incorporates the insights of quantum mechanics, quantum field theory, quantum gravity, and cosmology.  We’ll critically assess ‘hypercomputing’ proposals involving (for example) relativistic time dilation, black holes, closed timelike curves, and exotic cosmologies, and will make a 21st-century case for the physical Church-Turing Thesis.

Lecture 2: The Limits of Efficient Computation (watch streaming / PowerPoint slides)

Abstract: Computer scientists care about what’s computable not only in principle, but within the resource constraints of the physical universe.  A closely related question: which types of problems are solvable using a number of steps that scales reasonably (say, polynomially) with the problem size?  This lecture will examine whether the notorious NP-complete problems, like the Traveling Salesman Problem, are efficiently solvable using the resources of the physical world.  We’ll start with the P=?NP problem of classical computer science—its meaning, history, and current status.  We’ll then discuss quantum computers: how they work, how they can sometimes yield exponential speedups over classical computers, and why many believe that not even they will do so for the NP-complete problems.  Finally, we’ll critically assess proposals that would use exotic physics to go even beyond quantum computers, in terms of what they would render computable in polynomial time.

Lecture 3: The Quest for Quantum Computational Supremacy (watch streaming / PowerPoint slides)

Abstract: Can useful quantum computers be built in our world?  This talk will discuss the current status of the large efforts currently underway at Google, IBM, and many other places to build noisy quantum devices, with 50-100 qubits, that can clearly outperform classical computers at least on some specialized tasks — a milestone that’s been given the unfortunate name of “quantum supremacy.”  We’ll survey recent theoretical work (on BosonSampling, random circuit sampling, and more) that aims to tell us: which problems should we give these devices, that we’re as confident as possible are hard for classical computers? And how should we check whether the devices indeed solved them?  We’ll end by discussing a new protocol, for generating certified random bits, that can be implemented almost as soon as quantum supremacy itself is achieved, and which might therefore become the first application of quantum computing to be realized.

Finally, thanks so much to Giovanni Sommaruga and everyone else at ETH for arranging a fantastic visit.

Links, proofs, talks, jokes

Tuesday, July 30th, 2019

For those who haven’t yet seen it, Erica Klarreich has a wonderful article in Quanta on Hao Huang’s proof of the Sensitivity Conjecture. This is how good popular writing about math can be.

Klarreich quotes my line from this blog, “I find it hard to imagine that even God knows how to prove the Sensitivity Conjecture in any simpler way than this.” However, even if God doesn’t know a simpler proof, that of course doesn’t rule out the possibility that Don Knuth does! And indeed, a couple days ago Knuth posted his own variant of Huang’s proof on his homepage—in Knuth’s words, fleshing out the argument that Shalev Ben-David previously posted on this blog—and then left a comment about it here, the first comment by Knuth that I know about on this blog or any other blog. I’m honored—although as for whether the variants that avoid the Cauchy Interlacing Theorem are actually “simpler,” I guess I’ll leave that between Huang, Ben-David, Knuth, and God.

In Communications of the ACM, Samuel Greengard has a good, detailed article on Ewin Tang and her dequantization of the quantum recommendation systems algorithm. One warning (with thanks to commenter Ted): the sentence “The only known provable separation theorem between quantum and classical is sqrt(n) vs. n” is mistaken, though it gestures in the direction of a truth. In the black-box setting, we can rigorously prove all sorts of separations: sqrt(n) vs. n (for Grover search), exponential (for period-finding), and more. In the non-black-box setting, we can’t prove any such separations at all.
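
To make the Grover part of that concrete: the √n-vs-n gap is easy to see even in a few lines of classical statevector simulation. Here’s a minimal sketch (standard textbook material, nothing specific to the article):

```python
import numpy as np

# Toy statevector simulation of Grover search: with N = 2^n items and one
# marked item, ~(pi/4)*sqrt(N) oracle queries suffice, versus ~N/2 classically.
def grover(n_bits, marked):
    N = 2 ** n_bits
    state = np.full(N, 1.0 / np.sqrt(N))      # uniform superposition
    iterations = int(np.pi / 4 * np.sqrt(N))  # ~(pi/4)*sqrt(N)
    for _ in range(iterations):               # each iteration makes 1 oracle query
        state[marked] *= -1                   # oracle: phase-flip the marked item
        state = 2 * state.mean() - state      # diffusion: inversion about the mean
    return iterations, state[marked] ** 2

queries, p = grover(n_bits=10, marked=42)
print(f"{queries} queries, success probability {p:.4f}")  # 25 queries, p ≈ 0.9995
```

Of course, this simulation itself takes ~N time—which is the whole point of wanting the quantum computer.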

Last week I returned to the US from the FQXi meeting in the Tuscan countryside. This year’s theme was “Mind Matters: Intelligence and Agency in the Physical World.” I gave a talk entitled “The Search for Physical Correlates of Consciousness: Lessons from the Failure of Integrated Information Theory” (PowerPoint slides here), which reprised my blog posts critical of IIT from five years ago. There were thought-provoking talks by many others who might be known to readers of this blog, including Sean Carroll, David Chalmers, Max Tegmark, Seth Lloyd, Carlo Rovelli, Karl Friston … you can see the full schedule here. Apparently video of the talks is not available yet but will be soon.

Let me close this post by sharing two important new insights about quantum mechanics that emerged from my conversations at the FQXi meeting:

(1) In Hilbert space, no one can hear you scream. Unless, that is, you scream the exact same way everywhere, or unless you split into separate copies, one for each different way of screaming.

(2) It’s true that, as a matter of logic, the Schrödinger equation does not imply the Born Rule. Having said that, if the Schrödinger equation were leading a rally, and the crowd started a chant of “BORN RULE! BORN RULE! BORN RULE!”—the Schrödinger equation would just smile and wait 13 seconds for the chant to die down before continuing.

John Wright joins UT Austin

Wednesday, July 3rd, 2019

I’m delighted to announce that quantum computing theorist John Wright will be joining the computer science faculty at UT Austin in Fall 2020, after he finishes a one-year postdoc at Caltech.

John made an appearance on this blog a few months ago, when I wrote about the new breakthrough by him and Anand Natarajan: namely, that MIP* (multi-prover interactive proofs with entangled provers) contains NEEXP (nondeterministic double-exponential time). Previously, MIP* had only been known to contain NEXP (nondeterministic single exponential time). So, this is an exponential expansion in the power of entangled provers over what was previously known and believed, and the first proof that entanglement actually increases the power of multi-prover protocols, rather than decreasing it (as it could’ve done a priori). Even more strikingly, there seems to be no natural stopping point: MIP* might soon swallow up arbitrary towers of exponentials or even the halting problem (!). For more, see for example this Quanta article, or this post by Thomas Vidick, or this short story [sic] by Henry Yuen.

John grew up in Texas, so he’s no stranger to BBQ brisket or scorching weather. He did his undergrad in computer science at UT Austin—my colleagues remember him as a star—and then completed his PhD with Ryan O’Donnell at Carnegie Mellon, followed by a postdoc at MIT. Besides the work on MIP*, John is also well-known for his 2015 work with O’Donnell pinning down the sample complexity of quantum state tomography. Their important result, a version of which was independently obtained by Haah et al., says that if you want to learn an unknown d-dimensional quantum mixed state ρ to a reasonable precision, then ~d² copies of ρ are both necessary and sufficient. This solved a problem that had personally interested me, and already plays a role in, e.g., my work on shadow tomography and gentle measurements.

Our little quantum information center at UT Austin is growing rapidly. Shyam Shankar, a superconducting qubits guy who previously worked in Michel Devoret’s group at Yale, will also be joining UT’s Electrical and Computer Engineering department this fall. I’ll have two new postdocs—Andrea Rocchetto and Yosi Atia—as well as new PhD students. We’ll continue recruiting this coming year, with potential opportunities for students, postdocs, faculty, and research scientists across the CS, physics, and ECE departments as well as the Texas Advanced Computing Center (TACC). I hope you’ll consider applying to join us.

With no evaluative judgment attached, I can honestly say that this is an unprecedented time for quantum computing as a field. Where once faculty applicants struggled to make a case for quantum computing (physics departments: “but isn’t this really CS?” / CS departments: “isn’t it really physics?” / everyone: “couldn’t this whole QC thing, like, all blow over in a year?”), today departments are vying with each other and with industry players and startups to recruit talented people. In such an environment, we’re fortunate to be doing as well as we are. We hope to continue to expand.

Meanwhile, this was an unprecedented year for CS hiring at UT Austin more generally. John Wright is one of at least four new faculty (probably more) who will be joining us. It’s a good time to be in CS.

A huge welcome to John, and hook ’em Hadamards!

(And for US readers: have a great 4th! Though how could any fireworks match the proof of the Sensitivity Conjecture?)

Quantum Sabinacy

Monday, July 1st, 2019

Sabine Hossenfelder—well-known to readers of Shtetl-Optimized for opposing the building of a higher-energy collider, and various other things—has weighed in on “quantum supremacy” in this blog post and this video. Sabine consulted with me by phone before doing the video and post, and despite what some might see as her negative stance, I agree with what she has to say substantially more than I disagree.

I do, however, have a few quibbles:

1. We don’t know that millions of physical qubits will be needed for useful simulations of quantum chemistry.  It all depends on how much error correction is needed and how good the error-correcting codes and simulation algorithms become. Like, sure, you can generate pessimistic forecasts by plugging numbers into the best known codes and algorithms (see the toy estimate after this list). But “the best known” is a rapidly moving target—one where there have already been orders-of-magnitude improvements in the last decade.

2. To my mind, there’s a big conceptual difference between a single molecule that you can’t efficiently simulate classically, and a programmable computer that you can’t efficiently simulate classically.  The difference, in short, is that only for the computer, and not for the molecule, would it ever make sense to say it had given you a wrong answer! In other words, a physical system becomes a “computer” when, and only when, you have sufficient understanding of, and control over, its state space and time evolution that you can ask the system to simulate something other than itself, and then judge whether it succeeded or failed at that goal.

3. The entire point of my recent work, on certified randomness generation (see for example here or here), is that sampling random bits with a NISQ-era device could have a practical application. That application is … I hope you’re sitting down for this … sampling random bits! And then, more importantly and nontrivially, proving to a faraway skeptic that the bits really were randomly generated.

4. While I was involved in some of the first proposals for NISQ quantum supremacy experiments (such as BosonSampling), I certainly can’t take sole credit for the idea of quantum supremacy!  The term, incidentally, was coined by John Preskill.

5. The term “NISQ” (Noisy Intermediate Scale Quantum) was also coined by John Preskill.  He had no intention of misleading investors—he just needed a term to discuss the quantum computers that will plausibly be available in the near future.  As readers of this blog know, there certainly has been some misleading of investors (and journalists, and the public…) about the applications of near-term QCs. But I don’t think you can lay it at the feet of the term “NISQ.”
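
Coming back to quibble #1, here’s the kind of back-of-the-envelope calculation I mean, using the common surface-code heuristic that the logical error rate per qubit scales like 0.1·(p/pth)^((d+1)/2) at code distance d. To be clear, every number below is an illustrative assumption—about thresholds, target error rates, and physical error rates—not a claim about any actual device or paper:

```python
# Toy surface-code overhead estimate. The heuristic logical error rate
# 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) and all parameters below
# are illustrative assumptions, not data about any real device.
def overhead(n_logical, p_phys, p_threshold=1e-2, target_logical_error=1e-10):
    d = 3
    while 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) > target_logical_error:
        d += 2                             # surface-code distances are odd
    physical = n_logical * 2 * d ** 2      # ~2d^2 physical qubits per logical qubit
    return d, physical

for p_phys in [1e-3, 1e-4]:                # assumed physical error rates
    d, phys = overhead(n_logical=1000, p_phys=p_phys)
    print(f"p_phys = {p_phys:.0e}: distance {d}, ~{phys:,} physical qubits")
```

With these made-up numbers, a 10× improvement in the physical error rate already cuts the overhead from ~578,000 physical qubits to ~162,000—and better codes or algorithms move the answer even more. That’s why I don’t treat any specific “millions of qubits” forecast as gospel.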

Quanta of Solace

Thursday, June 20th, 2019

In Quanta magazine, Kevin Hartnett has a recent article entitled A New Law to Describe Quantum Computing’s Rise? The article discusses “Neven’s Law”—a conjecture, by Hartmut Neven (head of Google’s quantum computing effort), that the number of integrated qubits is now increasing exponentially with time, so that the difficulty of simulating a state-of-the-art QC on a fixed classical computer is increasing doubly exponentially with time. (Jonathan Dowling tells me that he expressed the same thought years ago.)
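
Just to unpack the double exponential: if you assume (hypothetically—these numbers are invented purely for illustration) that qubit counts double every year, then the cost of classical statevector simulation, which scales like 2^(number of qubits), grows doubly exponentially:

```python
# Toy illustration of "Neven's Law": exponential growth in qubit counts implies
# doubly exponential growth in classical simulation cost. The starting point
# and growth rate are invented for illustration.
for year in range(6):
    qubits = 5 * 2 ** year      # assumed: qubit count doubles every year
    print(f"year {year}: {qubits:4d} qubits -> 2^{qubits} ≈ {float(2 ** qubits):.1e} amplitudes")
```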

Near the end, the Quanta piece quotes some UT Austin professor whose surname starts with a bunch of A’s as follows:

“I think the undeniable reality of this progress puts the ball firmly in the court of those who believe scalable quantum computing can’t work. They’re the ones who need to articulate where and why the progress will stop.”

The quote is perfectly accurate, but in context, it might give the impression that I’m endorsing Neven’s Law. In reality, I’m reluctant to fit a polynomial or an exponential or any other curve through a set of numbers that so far hasn’t exceeded about 50. I say only that, regardless of what anyone believes is the ultimate rate of progress in QC, what’s already happened today puts the ball firmly in the skeptics’ court.
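
Here’s a toy illustration of why I’m reluctant (with invented qubit counts, emphatically not real data): fit an exponential and a mere quadratic to five small data points, and both track the data closely while disagreeing severalfold about year 10.

```python
import numpy as np

years = np.arange(5)
qubits = np.array([5, 9, 17, 30, 53])            # invented numbers, NOT real devices

exp_fit = np.polyfit(years, np.log(qubits), 1)   # exponential = linear in log-space
quad_fit = np.polyfit(years, qubits, 2)          # quadratic fit

for t in [*years, 10]:
    e = np.exp(np.polyval(exp_fit, t))
    q = np.polyval(quad_fit, t)
    print(f"year {t:2d}: exponential fit {e:8.1f}, quadratic fit {q:8.1f}")
```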

Also in Quanta, Anil Ananthaswamy has a new article out on How to Turn a Quantum Computer Into the Ultimate Randomness Generator. This piece covers two schemes for using a quantum computer to generate “certified random bits”—that is, bits you can prove are random to a faraway skeptic: one due to me, the other due to Brakerski et al. The article cites my paper with Lijie Chen, which shows that under suitable computational assumptions, the outputs in my protocol are hard to spoof using a classical computer. The randomness aspect will be addressed in a paper that I’m currently writing; for now, see these slides.

As long as I’m linking to interesting recent Quanta articles, Erica Klarreich has A 53-Year-Old Network Coloring Conjecture is Disproved. Briefly, Hedetniemi’s Conjecture stated that, given any two finite, undirected graphs G and H, the chromatic number of the tensor product G⊗H is just the minimum of the chromatic numbers of G and H themselves. This reasonable-sounding conjecture has now been falsified by Yaroslav Shitov. For more, see also this post by Gil Kalai—who appears here not in his capacity as a quantum computing skeptic.
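
If you want to play with the statement yourself, here’s a tiny brute-force sketch of the definitions (my own illustration—Shitov’s counterexample graphs are astronomically too large to check this way, so on anything you can actually run, the conjecture holds):

```python
from itertools import product

def tensor_product(G, H):
    # Tensor product: (u1,v1) ~ (u2,v2) iff u1~u2 in G and v1~v2 in H.
    (VG, EG), (VH, EH) = G, H
    V = [(u, v) for u in VG for v in VH]
    E = set()
    for u1, u2 in EG:
        for v1, v2 in EH:
            E.add(frozenset({(u1, v1), (u2, v2)}))
            E.add(frozenset({(u1, v2), (u2, v1)}))
    return V, E

def chromatic_number(graph):
    # Smallest k admitting a proper k-coloring (brute force; tiny graphs only).
    V, E = graph
    for k in range(1, len(V) + 1):
        for colors in product(range(k), repeat=len(V)):
            c = dict(zip(V, colors))
            if all(c[x] != c[y] for x, y in map(tuple, E)):
                return k

K3 = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])   # triangle: chromatic number 3
print(chromatic_number(tensor_product(K3, K3)),
      min(chromatic_number(K3), chromatic_number(K3)))   # prints: 3 3
```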

In interesting math news beyond Quanta magazine, the Berkeley alumni magazine has a piece about the crucial, neglected topic of mathematicians’ love for Hagoromo-brand chalk (hat tip: Peter Woit). I can personally vouch for this. When I moved to UT Austin three years ago, most offices in CS had whiteboards, but I deliberately chose one with a blackboard. I figured that chalk has its problems—it breaks, the dust gets all over—but I could live with them, much more than I could live with the Fundamental Whiteboard Difficulty, of all the available markers always being dry whenever you want to explain anything. With the Hagoromo brand, though, you pretty much get all the benefits of chalk with none of the downsides, so it just strictly dominates whiteboards.

Jan Kulveit asked me to advertise the European Summer Program on Rationality (ESPR), which will take place this August 13-23, and which is aimed at students ages 16-19. I’ve lectured both at ESPR and at a similar summer program that ESPR was modeled after (called SPARC)—and while I was never there as a student, it looked to me like a phenomenal experience. So if you’re a 16-to-19-year-old who reads this blog, please consider applying!

I’m now at the end of my annual family trip to Tel Aviv, returning to the Eastern US tonight, and then on to STOC’2019 at the ACM Federated Computing Research Conference in Phoenix (which I can blog about if anyone wants me to). It was a good trip, although marred by my two-year-old son Daniel falling onto sun-heated metal and suffering a second-degree burn on his leg, and then by the doctor botching the treatment. Fortunately Daniel’s now healing nicely. For future reference, whenever bandaging a burn wound, be sure to apply lots of Vaseline to prevent the bandage from drying out, and also to change the bandage daily. Accept no fancy-sounding substitute.

NP-complete Problems and Physics: A 2019 View

Sunday, June 2nd, 2019

If I want to get back to blogging on a regular basis, given the negative amount of time that I now have for such things, I’ll need to get better at dispensing with pun-filled titles, jokey opening statements, etc. etc., and resigning myself to a less witty, more workmanlike blog.

So in that spirit: a few weeks ago I gave a talk at the Fields Institute in Toronto, at a symposium to celebrate Stephen Cook and the 50th anniversary (or actually more like 48th anniversary) of the discovery of NP-completeness. Thanks so much to the organizers for making this symposium happen.

You can watch the video of my talk here (or read the PowerPoint slides here). The talk, on whether NP-complete problems can be efficiently solved in the physical universe, covers much the same ground as my 2005 survey article on the same theme (not to mention dozens of earlier talks), but this is an updated version and I’m happier with it than I was with most past iterations.

As I explain at the beginning of the talk, I wasn’t going to fly to Toronto at all, due to severe teaching and family constraints—but my wife Dana uncharacteristically urged me to go (“don’t worry, I’ll watch the kids!”). Why? Because in her view, it was the risks that Steve Cook took 50 years ago, as an untenured assistant professor at Berkeley, that gave birth to the field of computational complexity that Dana and I both now work in.

Anyway, be sure to check out the other talks as well—they’re by an assortment of random nobodies like Richard Karp, Avi Wigderson, Leslie Valiant, Michael Sipser, Alexander Razborov, Cynthia Dwork, and Jack Edmonds. I found the talk by Edmonds particularly eye-opening: he explains how he thought about (the objects that we now call) P and NP∩coNP when he first defined them in the early 60s, and how it was similar to and different from the way we think about them today.

Another memorable moment came when Edmonds interrupted Sipser’s talk—about the history of P vs. NP—to deliver a booming diatribe about how what really matters is not mathematical proof, but just how quickly you can solve problems in the real world. Edmonds added that, from a practical standpoint, P≠NP is “true today but might become false in the future.” In response, Sipser asked “what does a mathematician like me care about the real world?”—to roars of approval from the audience. I might’ve picked a different tack—about how for every practical person I meet for whom it’s blindingly obvious that “in real life, P≠NP,” I meet another for whom it’s equally obvious that “in real life, P=NP” (for all the usual reasons: because SAT solvers work so well in practice, because physical systems so easily relax to their ground states, etc.). No wonder it took 25+ years of smart people thinking about operations research and combinatorial optimization before the P vs. NP question was even explicitly posed.


Unrelated Announcement: The Texas Advanced Computing Center (TACC), a leading supercomputing facility in North Austin that’s part of the University of Texas, is seeking to hire a Research Scientist focused on quantum computing. Such a person would be a full participant in our Quantum Information Center at UT Austin, with plenty of opportunities for collaboration. Check out their posting!

A small post

Friday, May 3rd, 2019
  1. I really liked this article by Chris Monroe, of the University of Maryland and IonQ, entitled “Quantum computing is a marathon not a sprint.” The crazier expectations get in this field—and right now they’re really crazy, believe me—the more it needs to be said.
  2. In a piece for Communications of the ACM, Moshe Vardi came out as a “quantum computing skeptic.” But it turns out what he means by that is not that he knows a reason why QC is impossible in principle, but simply that it’s often overhyped and that it will be hard to establish a viable quantum computing industry. By that standard, I’m a “QC skeptic” as well! But then what does that make Gil Kalai or Michel Dyakonov?
  3. Friend-of-the-blog Bram Cohen asked me to link to this second-round competition for Verifiable Delay Functions, sponsored by his company Chia. Apparently the first link I provided actually mattered in sending serious entrants their way.
  4. Blogging, it turns out, is really, really hard when (a) your life has become a pile of real-world obligations stretching out to infinity, and also (b) the Internet has become a war zone, with anything you say quote-mined by people looking to embarrass you. But don’t worry, I’ll have more to say soon. In the meantime, doesn’t anyone have more questions about the research papers discussed in the previous post? Y’know, NEEXP in MIP*? SBP versus QMA? Gentle measurement of quantum states and differential privacy turning out to be almost the same subject?

Not yet retired from research

Friday, April 19th, 2019

Last night, two papers appeared on the quantum physics arXiv that my coauthors and I have been working on for more than a year, and that I’m pretty happy about.

The first paper, with Guy Rothblum, is Gentle Measurement of Quantum States and Differential Privacy (85 pages, to appear in STOC’2019). This is Guy’s first paper that has anything to do with quantum, and also my first paper that has anything to do with privacy. (What do I care about privacy? I just share everything on this blog…) The paper had its origin in a talk I gave at the Weizmann Institute about “shadow tomography” (a task where you have to measure quantum states very carefully to avoid destroying them). Guy was in the audience, and he got all excited that the techniques sounded just like what they use to ensure privacy in data-mining, and I figured it was just some wacky coincidence and brushed him off—but he persisted, and it turned out that he was 100% right: our two fields were often studying the same problems from different angles, and we could prove it. Anyway, here’s the abstract:

In differential privacy (DP), we want to query a database about n users, in a way that “leaks at most ε about any individual user,” even conditioned on any outcome of the query. Meanwhile, in gentle measurement, we want to measure n quantum states, in a way that “damages the states by at most α,” even conditioned on any outcome of the measurement. In both cases, we can achieve the goal by techniques like deliberately adding noise to the outcome before returning it. This paper proves a new and general connection between the two subjects. Specifically, we show that on products of n quantum states, any measurement that is α-gentle for small α is also O(α)-DP, and any product measurement that is ε-DP is also O(ε√n)-gentle.

Illustrating the power of this connection, we apply it to the recently studied problem of shadow tomography. Given an unknown d-dimensional quantum state ρ, as well as known two-outcome measurements E₁,…,Eₘ, shadow tomography asks us to estimate Pr[Eᵢ accepts ρ], for every i∈[m], by measuring few copies of ρ. Using our connection theorem, together with a quantum analog of the so-called private multiplicative weights algorithm of Hardt and Rothblum, we give a protocol to solve this problem using ~O((log m)²(log d)²) copies of ρ, compared to Aaronson’s previous bound of ~O((log m)⁴(log d)). Our protocol has the advantages of being online (that is, the Eᵢ’s are processed one at a time), gentle, and conceptually simple.

Other applications of our connection include new lower bounds for shadow tomography from lower bounds on DP, and a result on the safe use of estimation algorithms as subroutines inside larger quantum algorithms.
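
If you’ve never seen differential privacy before, the “deliberately adding noise” trick that the abstract mentions is, in its simplest classical form, the Laplace mechanism. Here’s a minimal sketch (standard DP textbook material, not code from our paper):

```python
import numpy as np

# The Laplace mechanism: answer a counting query about a database while
# guaranteeing epsilon-differential privacy. A counting query has sensitivity 1
# (one user changes the answer by at most 1), so noise of scale 1/epsilon works.
def private_count(database, predicate, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    true_count = sum(predicate(row) for row in database)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

db = [{"smoker": True}, {"smoker": False}, {"smoker": True}]
print(private_count(db, lambda row: row["smoker"], epsilon=0.5))
```

The dictionary in the paper swaps “one user’s row in the database” for “one of the n quantum states,” and “leaking about a user” for “damaging a state”—that’s the gentleness side of the correspondence.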

The second paper, with Robin Kothari, UT Austin PhD student William Kretschmer, and Justin Thaler, is Quantum Lower Bounds for Approximate Counting via Laurent Polynomials. Here’s the abstract:

Given only a membership oracle for S, it is well-known that approximate counting takes Θ(√(N/|S|)) quantum queries. But what if a quantum algorithm is also given “QSamples”—i.e., copies of the state |S⟩ = (1/√|S|) Σ_{i∈S} |i⟩—or even the ability to apply reflections about |S⟩? Our first main result is that, even then, the algorithm needs either Θ(√(N/|S|)) queries or else Θ(min{|S|^(1/3), √(N/|S|)}) reflections or samples. We also give matching upper bounds.

We prove the lower bound using a novel generalization of the polynomial method of Beals et al. to Laurent polynomials, which can have negative exponents. We lower-bound Laurent polynomial degree using two methods: a new “explosion argument” that pits the positive- and negative-degree parts of the polynomial against each other, and a new formulation of the dual polynomials method.

Our second main result rules out the possibility of a black-box Quantum Merlin-Arthur (or QMA) protocol for proving that a set is large. More precisely, we show that, even if Arthur can make T quantum queries to the set S⊆[N], and also receives an m-qubit quantum witness from Merlin in support of S being large, we have T·m = Ω(min{|S|, √(N/|S|)}). This resolves the open problem of giving an oracle separation between SBP, the complexity class that captures approximate counting, and QMA.

Note that QMA is “stronger” than the queries+QSamples model in that Merlin’s witness can be anything, rather than just the specific state |S⟩, but also “weaker” in that Merlin’s witness cannot be trusted. Intriguingly, Laurent polynomials also play a crucial role in our QMA lower bound, but in a completely different manner than in the queries+QSamples lower bound. This suggests that the “Laurent polynomial method” might be broadly useful in complexity theory.

I need to get ready for our family’s Seder now, but after that, I’m happy to answer any questions about either of these papers in the comments.

Meantime, the biggest breakthrough in quantum complexity theory of the past month isn’t either of the above: it’s the paper by Anand Natarajan and John Wright showing that MIP*, or multi-prover interactive proof systems with entangled provers, contains NEEXP, or nondeterministic doubly-exponential time (!!). I’ll try to blog about this later, but if you can’t wait, check out this excellent post by Thomas Vidick.

Just says in P

Wednesday, April 17th, 2019

Recently a Twitter account called justsaysinmice was started. The only thing this account does is repost breathless news articles about medical research breakthroughs that fail to mention that the effect in question was only observed in mice, and then add the words “IN MICE” to them. Simple concept, but it already seems to be changing the conversation about science reporting.

It occurred to me that we could do something analogous for quantum computing. While my own deep-seated aversion to Twitter prevents me from doing it myself, which of my readers is up for starting an account that just reposts one overhyped QC article after another, while appending the words “A CLASSICAL COMPUTER COULD ALSO DO THIS” to each one?

Can we reverse time to before this hypefest started?

Friday, March 15th, 2019

The purpose of this post is mostly just to signal-boost Konstantin Kakaes’s article in MIT Technology Review, entitled “No, scientists didn’t just ‘reverse time’ with a quantum computer.” The title pretty much says it all—but if you want more, you should read the piece, which includes the following droll quote from some guy calling himself “Director of the Quantum Information Center at the University of Texas at Austin”:

If you’re simulating a time-reversible process on your computer, then you can ‘reverse the direction of time’ by simply reversing the direction of your simulation. From a quick look at the paper, I confess that I didn’t understand how this becomes more profound if the simulation is being done on IBM’s quantum computer.

Incredibly, the time-reversal claim has now gotten uncritical attention in Newsweek, Discover, Cosmopolitan, my Facebook feed, and elsewhere—hence this blog post, which has basically no content except “the claim to have ‘reversed time,’ by running a simulation backwards, is exactly as true and as earth-shattering as a layperson might think it is.”

If there’s anything interesting here, I suppose it’s just that “scientists use a quantum computer to reverse time” is one of the purest examples I’ve ever seen of a scientific claim that basically amounts to a mind-virus or meme optimized for sharing on social media—discarding all nontrivial “science payload” as irrelevant to its propagation.