NSA in P/poly: The Power of Precomputation

May 22nd, 2015

Even after the Snowden revelations, there remained at least one big mystery about what the NSA was doing and how.  The NSA’s classified 2013 budget request mentioned, as a priority item, “groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.”  There was a requested increase, of several hundred million dollars, for “cryptanalytic IT services” and “cryptanalysis and exploitation services program C” (whatever that was).  And a classified presentation slide showed encrypted data being passed to a high-performance computing system called “TURMOIL,” and decrypts coming out.  But whatever was going on inside TURMOIL seemed to be secret even within NSA; someone at Snowden’s level wouldn’t have had access to the details.

So, what was (or is) inside the NSA’s cryptanalytic black box?  A quantum computer?  Maybe even one that they bought from D-Wave?  (Rimshot.)  A fast classical factoring algorithm?  A proof of P=NP?  Commentators on the Internet rushed to suggest each of these far-reaching possibilities.  Some of us tried to pour cold water on these speculations—pointing out that one could envision many scenarios that were a little more prosaic, a little more tied to the details of how public-key crypto is actually used in the real world.  Were we just naïve?

This week, a new bombshell 14-author paper (see also the website) advances an exceedingly plausible hypothesis about what may have been the NSA’s greatest cryptanalytic secret of recent years.  One of the authors is J. Alex Halderman of the University of Michigan, my best friend since junior high school, who I’ve blogged about before.  Because of that, I had some advance knowledge of this scoop, and found myself having to do what regular Shtetl-Optimized readers will know is the single hardest thing in the world for me: bite my tongue and not say anything.  Until now, that is.

Besides Alex, the other authors are David Adrian, Karthikeyan Bhargavan, Zakir Durumeric, Pierrick Gaudry, Matthew Green, Nadia Heninger, Drew Springall, Emmanuel Thomé, Luke Valenta, Benjamin VanderSloot, Eric Wustrow, Santiago Zanella-Béguelin, and Paul Zimmermann (two of these, Green and Heninger, have previously turned up on Shtetl-Optimized).

These authors study vulnerabilities in Diffie-Hellman key exchange, the “original” (but still widely-used) public-key cryptosystem, the one that predates even RSA.  Diffie-Hellman is the thing where Alice and Bob first agree on a huge prime number p and a number g, then Alice picks a secret a and sends Bob g^a (mod p), and Bob picks a secret b and sends Alice g^b (mod p), and then Alice and Bob can both compute (g^a)^b = (g^b)^a = g^(ab) (mod p), but an eavesdropper who’s listening in only knows p, g, g^a (mod p), and g^b (mod p), and one can plausibly conjecture that it’s hard from those things alone to get g^(ab) (mod p).  So then Alice and Bob share a secret unknown to the eavesdropper, which they didn’t before, and they can use that secret to start doing cryptography.
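
If it helps to see the protocol spelled out, here’s a minimal Python sketch of the exchange just described (with a toy prime that’s far too small for real use, and with authentication and key derivation omitted entirely):

```python
# Toy Diffie-Hellman key exchange (illustration only: the prime is far too
# small to be secure, and real deployments add authentication and key
# derivation on top of this bare exchange).
import secrets

p = 2**127 - 1   # a Mersenne prime, standing in for a huge "standard" prime
g = 3            # a small public base

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends g^a (mod p)
B = pow(g, b, p)   # Bob sends g^b (mod p)

# Each side combines its own secret with the other's public value:
shared_alice = pow(B, a, p)   # (g^b)^a (mod p)
shared_bob = pow(A, b, p)     # (g^a)^b (mod p)
assert shared_alice == shared_bob   # both equal g^(ab) (mod p)
```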

As far as anyone knows today, the best way to break Diffie-Hellman is simply by calculating discrete logarithms: that is, solving the problem of recovering a given only g and h = g^a (mod p).  At least on a classical computer, the fastest known algorithm for discrete logarithms (over fields of prime order) is the number field sieve (NFS).  Under plausible conjectures about the distribution of “smooth” numbers, NFS uses time that grows like exp((1.923+o(1)) · (log p)^(1/3) · (log log p)^(2/3)), where the exp and logs are base e (and yes, even the lower-order stuff like (log log p)^(2/3) makes a big difference in practice).  Of course, once you know the running time of the best-known algorithm, you can then try to choose a key size (that is, a value of log(p)) that’s out of reach for that algorithm on the computing hardware of today.

(Note that the recent breakthrough of Antoine Joux, solving discrete log in quasipolynomial time in fields of small characteristic, also relied heavily on sieving ideas.  But there are no improvements from this yet for the “original” discrete log problem, over prime fields.)

But there’s one crucial further fact, which has been understood for at least a decade by theoretical cryptographers, but somehow was slow to filter out to the people who deploy practical cryptosystems.  The further fact is that in NFS, you can arrange things so that almost all the discrete-logging effort depends only on the prime number p, and not at all on the specific numbers g and h for which you’re trying to take the discrete log.  After this initial “precomputation” step, you then have a massive database that you can use to speed up the “descent” step: the step of solving g^a = h (mod p), for any (g,h) pair that you want.

It’s a little like the complexity class P/poly, where a single, hard-to-compute “advice string” unlocks exponentially many inputs once you have it.  (Or a bit more precisely, one could say that NFS reveals that exponentiation modulo a prime number is sort of a trapdoor one-way function, except that the trapdoor information is subexponential-size, and given the trapdoor, inverting the function is still subexponential-time, but a milder subexponential than before.)
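
To make the precomputation/descent split concrete without wading into NFS itself, here’s a toy Python sketch that uses baby-step giant-step instead: an exponentially weaker algorithm, but one with the same amortization structure, since the expensive table depends only on the public p and g, and never on the target h:

```python
# Precomputation/descent illustrated with baby-step giant-step (BSGS),
# NOT with NFS: the point is only that the expensive table below depends
# just on the public (p, g), so its cost is amortized across every
# discrete log we ever take in that group.
from math import isqrt

p, g = 1000003, 2            # toy prime and base (a real p has 512+ bits)
m = isqrt(p - 1) + 1

# "Precomputation": ~sqrt(p) work, done once per (p, g).
baby_steps = {pow(g, j, p): j for j in range(m)}
giant_stride = pow(g, -m, p)   # modular inverse of g^m (Python 3.8+)

def dlog(h):
    """'Descent': find x with g^x = h (mod p); cheap given the table."""
    y = h
    for i in range(m):
        if y in baby_steps:
            return i * m + baby_steps[y]
        y = (y * giant_stride) % p
    raise ValueError("no discrete log exists for this h")

x = 123456
assert dlog(pow(g, x, p)) == x   # many targets, one precomputation
```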

The kicker is that, in practice, a large percentage of all clients and servers that use Diffie-Hellman key exchange use the same few prime numbers p.  This means that, if you wanted to decrypt a large fraction of all the traffic encrypted with Diffie-Hellman, you wouldn’t need to do NFS over and over: you could just do it for a few p’s and cache the results.  This fact can singlehandedly change the outlook for breaking Diffie-Hellman.

The story is different depending on the key size, log(p).  In the 1990s, the US government insisted on “export-grade” cryptography for products sold overseas (what a quaint concept!), which meant that the key size was restricted to 512 bits.  For 512-bit keys, Adrian et al. were able to implement NFS and use it to do the precomputation step in about 7 days on a cluster with a few thousand cores.  After this initial precomputation step (which produced 2.5GB of data), doing the descent, to find the discrete log for a specific (g,h) pair, took only about 90 seconds on a 24-core machine.

OK, but no one still uses 512-bit keys, do they?  The first part of Adrian et al.’s paper demonstrates that, because of implementation issues, even today you can force many servers to “downgrade” to the 512-bit, export-grade keys—and then, having done so, you can stall for time for 90 seconds as you figure out the session key, and then do a man-in-the-middle attack and take over and impersonate the server.  It’s an impressive example of the sort of game computer security researchers have been playing for a long time—but it’s really just a warmup to the main act.

As you’d expect, many servers today are configured more intelligently, and will only agree to 1024-bit keys.  But even there, Adrian et al. found that a large fraction of servers rely on just a single 1024-bit prime (!), and many of the ones that don’t rely on just a few other primes.  Adrian et al. estimate that, for a single 1024-bit prime, doing the NFS precomputation would take about 45 million years using a single core—or to put it more ominously, 1 year using 45 million cores.  If you built special-purpose hardware, that could go down by almost two orders of magnitude, putting the monetary cost at a few hundred million dollars, completely within the reach of a sufficiently determined nation-state.  Once the precomputation was done, and the terabytes of output stored in a data center somewhere, computing a particular discrete log would then take about 30 days using 1 core, or mere minutes using a supercomputer.  Once again, none of this is assuming any algorithmic advances beyond what’s publicly known.  (Of course, it’s possible that the NSA also has some algorithmic advances; even modest ones could obviate the need for special-purpose hardware.)

While writing this post, I did my own back-of-the-envelope, and got that using NFS, calculating a 1024-bit discrete log should be about 7.5 million times harder than calculating a 512-bit discrete log.  So, extrapolating from the 7 days it took Adrian et al. to do it for 512 bits, this suggests that it might’ve taken them about 143,840 years to calculate 1024-bit discrete logs with the few thousand cores they had, or 1 year if they had 143,840 times as many cores (since almost all this stuff is extremely parallelizable).  Adrian et al. mention optimizations that they expect would improve this by a factor of 3, giving us about 100 million core-years, very similar to Adrian et al.’s estimate of 45 million core-years (the lower-order terms in the running time of NFS might account for some of the remaining discrepancy).
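
If you want to redo the envelope yourself, here are a few lines of Python evaluating the running-time formula quoted earlier, with the o(1) and all lower-order terms dropped, so treat the outputs as order-of-magnitude at best:

```python
# Rough NFS hardness comparison, using exp(1.923 * (log p)^(1/3) *
# (log log p)^(2/3)) with the o(1) term ignored entirely.
from math import exp, log

def nfs_cost(bits):
    lp = bits * log(2)   # natural log of a `bits`-bit prime
    return exp(1.923 * lp ** (1 / 3) * log(lp) ** (2 / 3))

ratio = nfs_cost(1024) / nfs_cost(512)
print(f"1024-bit vs. 512-bit hardness ratio: {ratio:.3g}")   # ~7.6 million

# Scale up the 7 days that the 512-bit precomputation took on the cluster:
print(f"naive 1024-bit estimate: {ratio * 7 / 365:,.0f} cluster-years")
```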

Adrian et al. mount a detailed argument in their paper that all of the details about NSA’s “groundbreaking cryptanalytic capabilities” that we learned from the Snowden documents match what would be true if the NSA were doing something like the above.  The way Alex put it to me is that, sure, the NSA might not have been doing this, but if not, then he would like to understand why not—for it would’ve been completely feasible within the cryptanalytic budget they had, and the NSA would’ve known that, and it would’ve been a very good codebreaking value for the money.

Now that we know about this weakness of Diffie-Hellman key exchange, what can be done?

The most obvious solution—but a good one!—is just to use longer keys.  For decades, when applied cryptographers would announce some attack like this, theorists like me would say with exasperation: “dude, why don’t you fix all these problems in one stroke by just, like, increasing the key sizes by a factor of 10?  when it’s an exponential against a polynomial, we all know the exponential will win eventually, so why not just go out to where it does?”  The applied cryptographers explain to us, with equal exasperation in their voices, that there are all sorts of reasons why not, from efficiency to (maybe the biggest thing) backwards-compatibility.  You can’t unilaterally demand 2048-bit keys, if millions of your customers are using browsers that only understand 1024-bit keys.  On the other hand, given the new revelations, it looks like there really will be a big push to migrate to larger key sizes, as the theorists would’ve suggested from their ivory towers.

A second, equally-obvious solution is to stop relying so much on the same few prime numbers in Diffie-Hellman key exchange.  (Note that the reason RSA isn’t vulnerable to this particular attack is that it inherently requires a different composite number N for each public key.)  In practice, generating a new huge random prime number tends to be expensive—taking, say, a few minutes—which is why people so often rely on “standard” primes.  At the least, we could use libraries of millions of “safe” primes, from which a prime for a given session is chosen randomly.
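
As a rough sketch of what generating a fresh prime involves (and why it’s slow), here’s one way to search for a “safe” prime p = 2q+1 with q also prime, using sympy’s primality test; a real deployment would of course use a vetted cryptographic library:

```python
# Hedged sketch: search for a "safe" prime p = 2q + 1 (q prime) for use
# in Diffie-Hellman. Most random candidates fail one of the two primality
# tests, which is why this takes minutes at cryptographic sizes.
import secrets
from sympy import isprime

def random_safe_prime(bits=512):
    while True:
        # random odd (bits-1)-bit candidate for q, top bit forced to 1
        q = secrets.randbits(bits - 1) | (1 << (bits - 2)) | 1
        if isprime(q) and isprime(2 * q + 1):
            return 2 * q + 1

p = random_safe_prime(512)   # slow; 1024+ bits takes considerably longer
```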

A third solution is to migrate to elliptic-curve cryptography (ECC), which as far as anyone knows today, is much less vulnerable to descent attacks than the original Diffie-Hellman scheme.  Alas, there’s been a lot of understandable distrust of ECC after the Dual_EC_DRBG scandal, in which it came out that the NSA backdoored some of NIST’s elliptic-curve-based pseudorandom generators by choosing particular elliptic curves that it knew how to handle.  But maybe the right lesson to draw is that mod-p groups and elliptic-curve groups both seem to be pretty good for cryptography, but the mod-p groups are less good if everyone is using the same few prime numbers p (and those primes are “within nation-state range”), and the elliptic-curve groups are less good if everyone is using the same few elliptic curves.  (A lot of these things do seem pretty predictable with hindsight, but how many did you predict?)

Many people will use this paper to ask political questions, like: hasn’t the NSA’s codebreaking mission once again usurped its mission to ensure the nation’s information security?  Doesn’t the 512-bit vulnerability that many Diffie-Hellman implementations still face, as a holdover from the 1990s export rules, illustrate why encryption should never be deliberately weakened for purposes of “national security”?  How can we get over the issue of backwards-compatibility, and get everyone using strong crypto?  People absolutely should be asking such questions.

But for readers of this blog, there’s one question that probably looms even larger than those of freedom versus security, openness versus secrecy, etc.: namely, the question of theory versus practice.  Which “side” should be said to have “won” this round?  Some will say: those useless theoretical cryptographers, they didn’t even know how their coveted Diffie-Hellman system could be broken in the real world!  The theoretical cryptographers might reply: of course we knew about the ability to do precomputation with NFS!  This wasn’t some NSA secret; it’s something we discussed openly for years.  And if someone told us how Diffie-Hellman was actually being used (with much of the world relying on the same few primes), we could’ve immediately spotted the potential for such an attack.  To which others might reply: then why didn’t you spot it?

Perhaps the right lesson to draw is how silly such debates really are.  In the end, piecing this story together took a team that was willing to do everything from learning some fairly difficult number theory to coding up simulations to poring over the Snowden documents for clues about the NSA’s budget.  Clear thought doesn’t respect the boundaries between disciplines, or between theory and practice.

(Thanks very much to Nadia Heninger for reading this post and correcting a few errors in it.  For more about this, see Bruce Schneier’s post or Matt Green’s post.)

Five announcements

May 16th, 2015

1. Sanjeev Arora sent me a heads-up that there’s a discussion about the future of the STOC conference  at the Windows on Theory blog—in particular, about the idea of turning STOC into a longer “CS theory festival.”  If you have opinions about this, don’t miss the chance to make your voice heard.

2. Back in January, I blogged about a new quantum optimization algorithm by Farhi, Goldstone, and Gutmann, which was notable for being, as far as anyone could tell, the first quantum algorithm to achieve a provably better approximation ratio than the best-known classical algorithm for an NP-hard optimization problem.  Today, I report that a fearsome list of authors—Boaz Barak, Ankur Moitra, Ryan O’Donnell, Prasad Raghavendra, Oded Regev, David Steurer, Luca Trevisan, Aravindan Vijayaraghavan, David Witmer, and John Wright—has put out an eagerly-awaited paper that gives a classical algorithm for the same problem, with better performance than the quantum algorithm’s.  (They write that this “improves both qualitatively and quantitatively” on Farhi et al.’s work; I assume “qualitatively” refers to the fact that the new algorithm is classical.)  What happened, apparently, is that after I blogged (with enthusiasm) about the Farhi et al. result, a bunch of classical complexity theorists read my post and decided independently that they could match or beat the quantum algorithm’s performance classically; then they found out about each other and decided to merge their efforts.  I’m proud to say that this isn’t the first example of this blog catalyzing actual research progress, though it’s probably the best example so far.  [Update: Luca Trevisan now has a great post explaining what happened in much more detail, entitled “How Many Theoreticians Does It Take to Approximate Max 3Lin?”]

3. Jennifer Ouellette has a wonderful article in Quanta magazine about recent progress in AdS/MERA (i.e., “the emergence of spacetime from entanglement”), centered around the ideas of Brian Swingle.  This is one of the main things that I’d love to understand better right now—if I succeed even partially, you’ll know because I’ll write a blog post trying to explain it to others.  See also this blog post by Sean Carroll (about this paper by Ning Bao et al.), and this paper by Pastawski, Yoshida, Harlow, and Preskill, which explicitly mines the AdS/CFT correspondence for examples of quantum error-correcting codes.

4. Celebrity rationalist Julia Galef, who I had the great honor of meeting recently, has a podcast interview with Sean Carroll about why Carroll accepts the many-worlds interpretation.  (Or if, like me, you prefer the written word to the spoken one, click here for a full transcript.)  Unfortunately, Sean is given the opportunity at the end of the interview to recommend one science book to his listeners—just one!—but he squanders it by plugging some weird, self-indulgent thing called Quantum Computing Since Democritus.  Julia also has a YouTube video about what she learned from the interview, but I haven’t yet watched it (is there a transcript?).

5. I came across an insightful if meandering essay about nerd culture by Meredith L. Patterson.  In particular, noticing how the term “nerd” has been co-opted by normal, socially-skilled people, who’ve quickly set about remaking nerd social norms to make them identical to the rest of the world’s norms, Patterson coins the term “weird-nerd” to describe people like herself, who are still nerds in the original sense and who don’t see nerd culture as something horribly, irreparably broken.  As she writes: “We’ll start to feel less defensive when we get some indication — any indication — that our critics understand what parts of our culture we don’t want to lose and why we don’t want to lose them.”  (But is this the start of a linguistic treadmill?  Will we eventually need to talk about weird-weird-nerds, etc.?)

“Is There Something Mysterious About Math?”

April 22nd, 2015

When it rains, it pours: after not blogging for a month, I now have a second thing to blog about in as many days.  Aeon, an online magazine, asked me to write a short essay responding to the question above, so I did.  My essay is here.  Spoiler alert: my thesis is that yes, there’s something “mysterious” about math, but the main mystery is why there isn’t even more mystery than there is.  Also—shameless attempt to get you to click—the essay discusses the “discrete math is just a disorganized mess of random statements” view of Luboš Motl, who’s useful for putting flesh on what might otherwise be a strawman position.  Comments welcome (when aren’t they?).  You should also read other interesting responses to the same question by Penelope Maddy, James Franklin, and Neil Levy.  Thanks very much to Ed Lake at Aeon for commissioning these pieces.


Update (4/22): On rereading my piece, I felt bad that it didn’t make a clear enough distinction between two separate questions:

  1. Are there humanly-comprehensible explanations for why the mathematical statements that we care about are true or false—thereby rendering their truth or falsity “non-mysterious” to us?
  2. Are there formal proofs or disproofs of the statements?

Interestingly, neither of the above implies the other.  Thus, to take an example from the essay, no one has any idea how to prove that the digits 0 through 9 occur with equal frequency in the decimal expansion of π, and yet it’s utterly non-mysterious (at a “physics level of rigor”) why that particular statement should be true.  Conversely, there are many examples of statements for which we do have proofs, but which experts in the relevant fields still see as “mysterious,” because the proofs aren’t illuminating or explanatory enough.  Any proofs that require gigantic manipulations of formulas, “magically” terminating in the desired outcome, probably fall into that class, as do proofs that require computer enumeration of cases (like that of the Four-Color Theorem).
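
Incidentally, the empirical evidence for that statement about π is easy to peek at yourself.  Here’s a minimal sketch using the mpmath library:

```python
# Digit frequencies in the first 100,000 decimal digits of pi: each digit
# should appear close to 10,000 times if the equidistribution conjecture
# is on track (this is evidence, of course, not proof).
from collections import Counter
from mpmath import mp

mp.dps = 100_005                        # working precision, in digits
digits = mp.nstr(+mp.pi, 100_000)[2:]   # string of digits, minus "3."
counts = Counter(digits)
for d in "0123456789":
    print(d, counts[d])
```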

But it’s not just that proof and explanation are incomparable; sometimes they might even be at odds.  In this MathOverflow post, Timothy Gowers relates an interesting speculation of Don Zagier, that statements like the equidistribution of the digits of π might be unprovable from the usual axioms of set theory, precisely because they’re so “obviously” true—and for that very reason, there need not be anything deeper underlying their truth.  As Gowers points out, we shouldn’t go overboard with this speculation, because there are plenty of other examples of mathematical statements (the Green-Tao theorem, Vinogradov’s theorem, etc.) that also seem like they might be true “just because”—true only because their falsehood would require a statistical miracle—but for which mathematicians nevertheless managed to give fully rigorous proofs, in effect formalizing the intuition that it would take a miracle to make them false.

Zagier’s speculation is related to another objection one could raise against my essay: while I said that the “Gödelian gremlin” has remained surprisingly dormant in the 85 years since its discovery (and that this is a fascinating fact crying out for explanation), who’s to say that it’s not lurking in some of the very open problems that I mentioned, like π’s equidistribution, the Riemann Hypothesis, the Goldbach Conjecture, or P≠NP?  Conceivably, not only are all those conjectures unprovable from the usual axioms of set theory, but their unprovability is itself unprovable, and so on, so that we could never even have the satisfaction of knowing why we’ll never know.

My response to these objections is basically just to appeal yet again to the empirical record.  First, while proof and explanation need not go together and sometimes don’t, by and large they do go together: over thousands of years, mathematicians learned to seek formal proofs largely because they discovered that without them, their understanding constantly went awry.  Also, while no one can rule out that P vs. NP, the Riemann Hypothesis, etc., might be independent of set theory, there’s very little in the history of math—including in the recent history, which saw spectacular proofs of (e.g.) Fermat’s Last Theorem and the Poincaré Conjecture—that lends concrete support to such fatalism.

So in summary, I’d say that history does present us with “two mysteries of the mathematical supercontinent”—namely, why do so many of the mathematical statements that humans care about turn out to be tightly linked in webs of explanation, and also in webs of proof, rather than occupying separate islands?—and that these two mysteries are very closely related, if not quite the same.

Two papers

April 21st, 2015

Just to get myself back into the habit of blogging:

For those of you who don’t read Lance’s and Bill’s blog, there was a pretty significant breakthrough in complexity theory announced last week.  (And yes, I’m now spending one of the two or so uses of the word “breakthrough” that I allow myself per year—wait, did I just spend the second one with this sentence?)  Ben Rossman (a former MIT PhD student whose thesis committee I was honored to serve on), Rocco Servedio, and Li-Yang Tan have now shown that the polynomial hierarchy is infinite relative to a random oracle, thereby solving the main open problem from Johan Håstad’s 1986 PhD thesis.  While it feels silly even to mention it, the best previous result in this direction was to separate P^NP from Σ₂P relative to a random oracle, which I did in my Counterexample to the Generalized Linial-Nisan Conjecture paper.  In some sense Rossman et al. infinitely improve on that (using completely different techniques).  Proving their result boils down to proving a new lower bound on the sizes of constant-depth circuits.  Basically, they need to show that, for every k, there are problems that can be solved by small circuits with k layers of AND, OR, and NOT gates, but for which the answer can’t even be guessed, noticeably better than chance, by any small circuit with only k-1 layers of AND, OR, and NOT gates.  They achieve that using a new generalization of the method of random restrictions.  Congratulations to Ben, Rocco, and Li-Yang!

Meanwhile, if you want to know what I’ve been doing for the last couple months, one answer is contained in this 68-page labor of love preprint by me and my superb PhD students Daniel Grier and Luke Schaeffer.  There we give a full classification of all possible sets of classical reversible gates acting on bits (like the Fredkin, Toffoli, and CNOT gates), as well as a linear-time algorithm to decide whether one reversible gate generates another one (previously, that problem wasn’t even known to be decidable).  We thereby completely answer a question that basically no one was asking, although I don’t understand why not.

The ultimate physical limits of privacy

March 11th, 2015

Somewhat along the lines of my last post, the other day a reader sent me an amusing list of questions about privacy and fundamental physics.  The questions, and my answers, are below.

1. Does the universe provide us with a minimum level of information security?

I’m not sure what the question means. Yes, there are various types of information security that are rooted in the known laws of physics—some of them (like quantum key distribution) even relying on specific aspects of quantum physics—whose security one can argue for by appealing to the known properties of the physical world. Crucially, however, any information security protocol is only as good as the assumptions it rests on: for example, that the attacker can’t violate the attack model by, say, breaking into your house with an ax!

2. For example, is my information safe from entities outside the light-cone I project?

Yes, I think it’s safe to assume that your information is safe from any entities outside your future light-cone. Indeed, if information is not in your future light-cone, then almost by definition, you had no role in creating it, so in what sense should it be called “yours”?

3. Assume that there are distant alien cultures with infinite life spans – would they always be able to wait long enough for my light cone to spread to them, and then have a chance of detecting my “private” information?

First of all, the aliens would need to be in your future light-cone (see my answer to 2). In 1998, it was discovered that there’s a ‘dark energy’ pushing the galaxies apart at an exponentially-increasing rate. Assuming the dark energy remains there at its current density, galaxies that are far enough away from us (more than a few tens of billions of light-years) will always recede from us faster than the speed of light, meaning that they’ll remain outside our future light-cone, and signals from us can never reach them. So, at least you’re safe from those aliens!

For the aliens in your future light-cone, the question is subtler. Suppose you took the only piece of paper on which your secrets were written, and burned it to ash—nothing high-tech, just burned it. Then there’s no technology that we know today, or could even seriously envision, that would piece the secrets together. It would be like unscrambling an egg, or bringing back the dead from decomposing corpses, or undoing a quantum measurement. It would mean, effectively, reversing the Arrow of Time in the relevant part of the universe. This is formally allowed by the Second Law of Thermodynamics, since the decrease in entropy within that region could be balanced by an increase in entropy elsewhere, but it would require a staggering level of control over the region’s degrees of freedom.

On the other hand, it’s also true that the microscopic laws of physics are reversible: they never destroy information. And for that reason, as a matter of principle, we can’t rule out the possibility that some civilization of the very far future, whether human or alien, could piece together what was written on your paper even after you’d burned it to a crisp. Indeed, with such godlike knowledge and control, maybe they could even reconstruct the past states of your brain, and thereby piece together private thoughts that you’d never written anywhere!

4. Does living in a black hole provide privacy? Couldn’t they follow you into the hole?

No, I would not recommend jumping into a black hole as a way to ensure your privacy. For one thing, you won’t get to enjoy the privacy for long (a couple hours, maybe, for a supermassive black hole at the center of a galaxy?) before getting spaghettified on your way to the singularity. For another, as you correctly pointed out, other people could still snoop on you by jumping into the black hole themselves—although they’d have to want badly enough to learn your secrets that they wouldn’t mind dying themselves along with you, and also not being able to share whatever they learned with anyone outside the hole.

But a third problem is that even inside a black hole, your secrets might not be safe forever! Since the 1970s, it’s been thought that all information dropped into a black hole eventually comes out, in extremely-scrambled form, in the Hawking radiation that black holes produce as they slowly shrink and evaporate. What do I mean by “slowly”? Well, the evaporation would take about 10^70 years for a black hole the mass of the sun, or about 10^100 years for the black holes at the centers of galaxies. Furthermore, even after the black hole had evaporated, piecing together the infalling secrets from the Hawking radiation would probably make reconstructing what was on the burned paper from the smoke and ash seem trivial by comparison! But just like in the case of the burned paper, the information is still formally present (if current ideas about quantum gravity are correct), so one can’t rule out that it could be reconstructed by some civilization of the extremely remote future.

The flow of emails within the block inbox

March 7th, 2015

As a diversion from the important topics of shaming, anti-shaming, and anti-anti-shaming, I thought I’d share a little email exchange (with my interlocutor’s kind permission), which gives a good example of what I find myself doing all day when I’m not blogging, changing diapers, or thinking about possibly doing some real work (but where did all the time go?).


Dear Professor Aaronson,

I would be very pleased to know your opinion about time.  In a letter of condolence to the Besso family, Albert Einstein wrote: “Now he has departed from this strange world a little ahead of me. That means nothing. People like us, who believe in physics, know that the distinction between past, present and future is only a stubbornly persistent illusion.” I’m a medical doctor and every day I see time’s effect on human bodies. Is Einstein saying time is an illusion?  For those who ‘believe in physics,’ is death an illusion?  Don’t we really lose our loved ones, or do they continue to live in an ‘eternal world’?

Is time only a human perceptual illusion (as some scientists say physics has proved)?


Dear [redacted],

I don’t read Einstein in that famous quote as saying that time itself is an illusion, but rather, that the sense of time flowing from past to present to future is an illusion. He meant, for example, that the differential equations of physics can just as easily be run backward (from future to past) as forward (from past to future), and that studying physics can strongly encourage a perspective—which philosophers call the “block universe” perspective—where you treat the entire history of spacetime as just a fixed, 4-dimensional manifold, with time simply another dimension in addition to the three spatial ones (admittedly, a dimension that the laws of physics treat somewhat differently than the other three). And yes, relativity encourages this perspective, by showing that different observers, moving at different speeds relative to each other, will divide up the 4-dimensional manifold into time slices in different ways, with two events judged to be simultaneous by one observer judged to be happening at different times by another.

But even after Einstein is read this way, I’d personally respond: well, that’s just one perspective you can take. A perfectly understandable one, if you’re Einstein, and especially if you’re Einstein trying to comfort the bereaved. But still: would you want to say, for example, that because physics treats the table in front of you as just a collection of elementary particles held together by forces, therefore the table, as such, doesn’t “exist”? That seems overwrought. Physics deepens your understanding of the table, of course—showing you what its microscopic constituents are and why they hold themselves together—but the table still “exists.”  In much the same way, physics enormously deepened our understanding of what we mean by the “flow of time”—showing how the “flow” emerges from the time-symmetric equations of physics, combined with the time-asymmetric phenomena of thermodynamics, which increase the universe’s entropy as we move away from the Big Bang, and thereby allow for the creation of memories, records, and other irreversible effects (a part of the story that I didn’t even get into here). But it feels overwrought to say that, because physics gives us a perspective from which we can see the “flow of time” as emerging from something deeper, therefore the “flow” doesn’t exist, or is just an illusion.

Hope that helps!

Best,
Scott


(followup question)

Dear Professor,

I’ve been thinking about the “block universe” and it seems to me that in it past, present and future all coexist.  So, on the basis of Einstein’s theory, does everything exist eternally, and why do we perceive only the present?


(reply)

But you don’t perceive only the present!  In the past, you perceived what’s now the past (and which you now remember), and in the future, you’ll perceive what’s now the future (and which you now look forward to), right?  And as for why the present is the present, and not some other point in time?  Well, that strikes me as one of those questions like why you’re you, out of all the possible people who you could have been instead, or why, assuming there are billions of habitable planets, you find yourself on earth and not on any of the other planets.  Maybe the best answer is that you had to be someone, living somewhere, at some particular point in time when you asked this question—and you could’ve wondered the same thing regardless of what the answer had turned out to be.

How can we fight online shaming campaigns?

February 25th, 2015

Longtime friend and colleague Boaz Barak sent me a fascinating New York Times Magazine article that profiles people who lost their jobs or otherwise had their lives ruined, because of a single remark that then got amplified a trillionfold in importance by social media.  (The author, Jon Ronson, also has a forthcoming book on the topic.)  The article opens with Justine Sacco: a woman who, about to board a flight to Cape Town, tweeted “Going to Africa.  Hope I don’t get AIDS.  Just kidding.  I’m white!”

To the few friends who read Sacco’s Twitter feed, it would’ve been obvious that she was trying to mock the belief of many well-off white people that they live in a bubble, insulated from the problems of the Third World; she wasn’t actually mocking black Africans who suffer from AIDS.  In a just world, maybe Sacco deserved someone to take her aside and quietly explain that her tweet might be read the wrong way, that she should be more careful next time.  Instead, by the time she landed in Cape Town, she learned that she’d become the #1 worldwide Twitter trend and a global symbol of racism.  She lost her career, she lost her entire previous life, and tens of thousands of people expressed glee about it.  The article rather heartbreakingly describes Sacco’s attempts to start over.

There are many more stories like the above.  Some I’d already heard about: the father of three who lost his job after he whispered a silly joke involving “dongles” to the person next to him at a conference, whereupon Adria Richards, a woman in front of him, snapped his photo and posted it to social media, to make an example of him as a sexist pig.  (Afterwards, a counter-reaction formed, which successfully got Richards fired from her job: justice??)  Other stories I hadn’t heard.

Reading this article made it clear to me just how easily I got off, in my own recent brush with the online shaming-mobs.  Yes, I made the ‘mistake’ of writing too openly about my experiences as a nerdy male teenager, and the impact that one specific aspect of feminist thought (not all of feminism!) had had on me.  Within the context of the conversation that a few nerdy men and women were having on this blog, my opening up led to exactly the results I was hoping for: readers thoughtfully sharing their own experiences, a meaningful exchange of ideas, even (dare I say it?) glimmers of understanding and empathy.

Alas, once the comment was wrested from its original setting into the clickbait bazaar, the story became “MIT professor explains: the real oppression is having to learn to talk to women” (the title of Amanda Marcotte’s hit-piece, something even some in Marcotte’s ideological camp called sickeningly cruel).  My photo was on the front page of Salon, next to the headline “The plight of the bitter nerd.”  I was subjected to hostile psychoanalysis not once but twice on ‘Dr. Nerdlove,’ a nerd-bashing site whose very name drips with irony, rather like the ‘Democratic People’s Republic of Korea.’  There were tweets and blog comments that urged MIT to fire me, that compared me to a mass-murderer, that “deduced” (from first principles!) all the ways in which my parents screwed up in raising me, and that insisted my female students must cower in fear of me.   And yes, when you Google me, this affair now more-or-less overshadows everything else I’ve done in my life.

But then … there were also hundreds of men and women who rose to my defense, and they were heavily concentrated among the people I most admire and respect.  My supporters ranged from the actual female students who took my classes or worked with me or who I encouraged in their careers, from whom there was only kindness, not a single negative word; to the shy nerds who thanked me for being one of the only people to acknowledge their reality; to the lesbians and bisexual women who told me my experience also resonated with them; to the female friends and colleagues who sent me notes urging me to ignore the nonsense.  In the end, not only have I not lost any friends over this, I’ve gained new ones, and I’ve learned new sides of the friends I had.

Oh, and I didn’t get any death threats: I guess that’s good!  (Once in my life I did get death threats—graphic, explicit threats, about which I had to contact the police—but it was because I refused to publicize someone’s P=NP proof.)

Since I was away from campus when this blew up, I did feel some fear about the professional backlash that would await me on my return.  Would my office be vandalized?  Would activist groups be protesting my classes?  Would MIT police be there to escort me from campus?

Well, you want to know what happened instead?  Students and colleagues have stopped me in the hall, or come by my office, just to say they support me.  My class has record enrollment this term.  I was invited to participate in MIT’s Diversity Summit, since the organizers felt it would mean a lot to the students to see someone there who had opened up about diversity issues in STEM in such a powerful way.  (I regretfully had to decline, since the summit conflicted with a trip to Stanford.)  And an MIT graduate women’s reading group invited me for a dinner discussion (at my suggestion, Laurie Penny participated as well).  Imagine that: not only are MIT’s women’s groups not picketing me, they’re inviting me over for dinner!  Is there any better answer to the claim, urged on me by some of my overzealous supporters, that the bile of Amanda Marcotte represents all of feminism these days?

Speaking of which, I met Laurie Penny for coffee last month, and she and I quickly hit it off.  We’ve even agreed to write a joint blog post about our advice for shy nerds.  (In my What I Believe post, I had promised a post of advice for shy female nerds—but at Laurie’s urging, we’re broadening the focus to shy nerds of both sexes.)  Even though Laurie’s essay is the thing that brought me to the attention of the Twitter-mobs (which wasn’t Laurie’s intent!), and even though I disagreed with several points in her essay, I knew on reading it that Laurie was someone I’d enjoy talking to.  Unlike so much writing by online social justice activists, which tends to be encrusted with the specialized technical terms of that field—you know, terms like “asshat,” “shitlord,” “douchecanoe,” and “precious feefees of entitled white dudes”—Laurie’s prose shone with humanity and vulnerability: her own, which she freely shared, and mine, which she generously acknowledged.

Overall, the response to my comment has never made me happier or more grateful to be part of the STEM community (I never liked the bureaucratic acronym “STEM,” but fine, I’ll own it).  To many outsiders, we STEM nerds are a sorry lot: we’re “sperglords” (yes, slurs are fine, as long as they’re directed against the right targets!) who might be competent in certain narrow domains, but who lack empathy and emotional depth, and are basically narcissistic children.  Yet somehow when the chips were down, it’s my fellow STEM nerds, and people who hang out with STEM nerds a lot, who showed me far more empathy and compassion than many of the “normals” did.  So if STEM nerds are psychologically broken, then I say: may I surround myself, for the rest of my life, with men and women who are psychologically broken like I am.  May I raise Lily, and any future children I have, to be as psychologically broken as they can be.  And may I stay as far as possible from anyone who’s too well-adjusted.

I reserve my ultimate gratitude for the many women in STEM, friends and strangers alike, who sent me messages of support these past two months.  I’m not ashamed to say it: witnessing how so many STEM women stood up for me has made me want to stand up for them, even more than I did before.  If they’re not called on often enough in class, I’ll call on them more.  If they’re subtly discouraged from careers in science, I’ll blatantly encourage them back.  If they’re sexually harassed, I’ll confront their harassers myself (well, if asked to).  I will listen to them, and I will try to improve.

Is it selfish that I want to help female STEM nerds partly because they helped me?  Here’s the thing: one of my deepest moral beliefs is in the obligation to fight for those among the disadvantaged who don’t despise you, and who wouldn’t gladly rid the planet of everyone like you if they could.  (As I’ve written before, on issue after issue, this belief makes me a left-winger by American standards, and a right-winger by academic ones.)  In the present context, I’d say I have a massive moral obligation toward female STEM nerds and toward Laurie Penny’s version of feminism, and none at all toward Marcotte’s version.

All this is just to say that I’m unbelievably lucky—privileged (!)—to have had so many at MIT and elsewhere willing to stand up for me, and to have reached a stage in life where I’m strong enough to say what I think and to weather anything the Internet says back.  What worries me is that others, more vulnerable, didn’t and won’t have it as easy when the Twitter hate-machine turns its barrel on them.  So in the rest of this post, I’d like to discuss the problem of what to do about social-media shaming campaigns that aim to, and do, destroy the lives of individuals.  I’m convinced that this is a phenomenon that’s only going to get more and more common: something sprung on us faster than our social norms have evolved to deal with it.  And it would be nice if we could solve it without having to wait for a few high-profile suicides.

But first, let me address a few obvious questions about why this problem is even a problem at all.

Isn’t social shaming as old as society itself—and permanent records of the shaming as old as print media?

Yes, but there’s also something fundamentally new about the problem of the Twitter-mobs.  Before, it would take someone—say, a newspaper editor—to make a conscious decision to the effect, “this comment is worth destroying someone’s life over.”  Today, there might be such an individual, but it’s also possible for lives to be destroyed in a decentralized, distributed fashion, with thousands of Twitterers collaborating to push a non-story past the point of no return.  And among the people who “break” the story, not one has to intend to ruin the victim’s life, or accept responsibility for it afterward: after all, each one made the story only ε bigger than it already was.  (Incidentally, this is one reason why I haven’t gotten a Twitter account: while it has many worthwhile uses, it’s also a medium that might as well have been designed for mobs, for ganging up, for status-seeking among allies stripped of rational arguments.  It’s like the world’s biggest high school.)

Don’t some targets of online shaming campaigns, y’know, deserve it?

Of course!  Some are genuine racists or misogynists or homophobes, who once would’ve been able to inflict hatred their entire lives without consequence, and were only brought down thanks to social media.  The trouble is, the participants in online shaming campaigns will always think they’re meting out righteous justice, whether they are or aren’t.  But there’s an excellent reason why we’ve learned in modern societies not to avenge even the worst crimes via lynch mobs.  There’s a reason why we have trials and lawyers and the opportunity for the accused to show their innocence.

Some might say that no safeguards are possible or necessary here, since we’re not talking about state violence, just individuals exercising their free speech right to vilify someone, demand their firing, that sort of thing.  Yet in today’s world, trial-by-Internet can be more consequential than the old kind of trial: would you rather spend a year in jail, but then be free to move to another town where no one knew about it, or have your Google search results tarnished with lurid accusations (let’s say, that you molested children) for the rest of your life—to have that forever prevent you from getting a job or a relationship, and have no way to correct the record?  With trial by Twitter, there’s no presumption of innocence, no requirement to prove that any other party was harmed, just the law of the schoolyard.

Whether shaming is justified in a particular case is a complicated question, but for whatever it’s worth, here are a few of the questions I would ask:

  • Did the person express a wish for anyone (or any group of people) to come to harm, or for anyone’s rights to be infringed?
  • Did the person express glee or mockery about anyone else’s suffering?
  • Did the person perpetrate a grievous factual falsehood—like, something one could prove was a falsehood in a court of law?
  • Did the person violate anyone else’s confidence?
  • How much does the speaker’s identity matter?  If it had been a man rather than a woman (or vice versa) saying parallel things, would we have taken equal offense?
  • Does the comment have what obscenity law calls “redeeming social value”?  E.g., does it express an unusual viewpoint, or lead to an interesting discussion?

Of course, even in those cases where shaming campaigns are justified, they’ll sometimes be unproductive and ill-advised.

Aren’t society’s most powerful fair targets for public criticism, even mocking or vicious criticism?

Of course.  Few would claim, for example, that we have an ethical obligation to ease up on Todd Akin over his “legitimate rape” remarks, since all the rage might give Akin an anxiety attack.  Completely apart from the (de)merits of the remarks, we accept that, when you become (let’s say) an elected official, a CEO, or a university president, part of the bargain is that you no longer get to complain if people organize to express their hatred of you.

But what’s striking about the cases in the NYT article is that it’s not public figures being gleefully destroyed: just ordinary people who in most cases, made one ill-advised joke or tweet, no worse than countless things you or I have probably said in private among friends.  The social justice warriors try to justify what would otherwise look like bullying by shifting attention away from individuals: sure, Justine Sacco might be a decent person, but she stands for the entire category of upper-middle-class, entitled white women, a powerful structural force against whom the underclass is engaged in a righteous struggle.  Like in a war, the enemy must be fought by any means necessary, even if it means picking off one hapless enemy foot-soldier to make an example to the rest.  And anyway, why do you care more about this one professional white woman, than about the millions of victims of racism?  Is it because you’re a racist yourself?

I find this line of thinking repugnant.  For it perverts worthy struggles for social equality into something callous and inhuman, and thereby undermines the struggles themselves.  It seems to me to have roughly the same relation to real human rights activism as the Inquisition did to the ethical teachings of Jesus.  It’s also repugnant because of its massive chilling effect: watching a few shaming campaigns is enough to make even the most well-intentioned writer want to hide behind a pseudonym, or only offer those ideas and experiences that are sure to win approval.  And the chilling effect is not some accidental byproduct; it’s the goal.  This negates what, for me, is a large part of the promise of the Internet: that if people from all walks of life can just communicate openly, everything made common knowledge, nothing whispered or secondhand, then all the well-intentioned people will eventually come to understand each other.


If I’m right that online shaming of decent people is a real problem that’s only going to get worse, what’s the solution?  Let’s examine five possibilities.

(1) Libel law.  For generations, libel has been recognized as one of the rare types of speech that even a liberal, democratic society can legitimately censor (along with fraud, incitement to imminent violence, national secrets, child porn, and a few others).  That libel is illegal reflects a realistic understanding of the importance of reputation: if, for example, CNN falsely reports that you raped your children, then it doesn’t really matter if MSNBC later corrects the record; your life as you knew it is done.

The trouble is, it’s not clear how to apply libel law in the age of social media.  In the cases we’re talking about, an innocent person’s life gets ruined because of the collective effect of thousands of people piling on to make nasty comments, and it’s neither possible nor desirable to prosecute all of them.  Furthermore, in many cases the problem is not that the shamers said anything untrue: rather, it’s that they “merely” took something true and spitefully misunderstood it, or blew it wildly, viciously, astronomically out of proportion.  I don’t see any legal remedies here.

(2) “Shame the shamers.”  Some people will say the only answer is to hit the shamers with their own weapons.  If an overzealous activist gets an innocent jokester fired from his job, shame the activist until she’s fired from her job.  If vigilantes post the jokester’s home address on the Internet with crosshairs overlaid, find the vigilantes’ home addresses and post those.  It probably won’t surprise many people that I’m not a fan of this solution.  For it only exacerbates the real problem: that of mob justice overwhelming reasoned debate.  The most I can say in favor of vigilantism is this: you probably don’t get to complain about online shaming, if what you’re being shamed for is itself a shaming campaign that you prosecuted against a specific person.

(In a decade writing this blog, I can think of exactly one case where I engaged in what might be called a shaming campaign: namely, against the Bell’s inequality denier Joy Christian.  Christian had provoked me over six years, not merely by being forehead-bangingly wrong about Bell’s theorem, but by insulting me and others when we tried to reason with him, and by demanding prize money from me because he had ‘proved’ that quantum computing was a fraud.  Despite that, I still regret the shaming aspects of my Joy Christian posts, and will strive not to repeat them.)

(3) Technological solutions.  We could try to change the functioning of the Internet, to make it harder to use it to ruin people’s lives.  This, more-or-less, is what the European Court of Justice was going for, with its much-discussed recent ruling upholding a “right to be forgotten” (more precisely, a right for individuals to petition for embarrassing information about them to be de-listed from search engines).  Alas, I fear that the Streisand effect, the Internet’s eternal memory, and the existence of different countries with different legal systems will forever make a mockery of all such technological solutions.  But, OK, given that Google is constantly tweaking its ranking algorithms anyway, maybe it could give less weight to cruel attacks against non-public-figures?  Or more weight (or even special placement) to sites explaining how the individual was cleared of the accusations?  There might be scope for such things, but I have the strong feeling that they should be done, if at all, on a voluntary basis.

(4) Self-censorship.  We could simply train people not to express any views online that might jeopardize their lives or careers, or at any rate, not to express those views under their real names.  Many people I’ve talked to seem to favor this solution, but I can’t get behind it.  For it effectively cedes to the most militant activists the right to decide what is or isn’t acceptable online discourse.  It tells them that they can use social shame as a weapon to get what they want.  When women are ridiculed for sharing stories of anorexia or being sexually assaulted or being discouraged from careers in science, it’s reprehensible to say that the solution is to teach those women to shut up about it.  I not only agree with that but go further: privacy is sometimes important, but is also an overrated value.  The respect that one rational person affords another for openly sharing the truth (or his or her understanding of the truth), in a spirit of sympathy and goodwill, is a higher value than privacy.  And the Internet’s ability to foster that respect (sometimes!) is worth defending.

(5) Standing up.  And so we come to the only solution that I can wholeheartedly stand behind.  This is for people who abhor shaming campaigns to speak out, loudly, for those who are unfairly shamed.

At the nadir of my own Twitter episode, when it felt like my life was finished and I might as well throw in the towel, the psychiatrist Scott Alexander wrote a 10,000-word essay in my defense, which also ranged controversially into numerous other issues.  In a comment on his girlfriend Ozy’s blog, Alexander now says that he regrets aspects of Untitled (then again, it was already tagged “Things I Will Regret Writing” when he posted it!).  In particular, he now feels that the piece was too broad in its critique of feminism.  However, he then explains as follows what motivated him to write it:

Scott Aaronson is one of the nicest and most decent people in the world, who does nothing but try to expand human knowledge and support and mentor other people working on the same in a bunch of incredible ways. After a lot of prompting he exposed his deepest personal insecurities, something I as a psychiatrist have to really respect. Amanda Marcotte tried to use that to make mincemeat of him, casually, as if destroying him was barely worth her time. She did it on a site where she gets more pageviews than he ever will, among people who don’t know him, and probably stained his reputation among nonphysicists permanently. I know I have weird moral intuitions, but this is about as close to pure evil punching pure good in the face just because it can as I’ve ever seen in my life. It made me physically ill, and I mentioned in the comments of the post that I lost a couple pounds pacing back and forth and shaking and not sleeping after I read it. That was the place I was writing from. And it was part of what seemed to me to be an obvious trend, and although “feminists vs. nerds” is a really crude way of framing it, I couldn’t think of a better one in that mental state and I couldn’t let it pass.

I had three reactions on reading this.  First, if there is a Scott in this discussion who’s “pure good,” then it’s not I.  Second, maybe the ultimate solution to the problem of online shaming mobs is to make a thousand copies of Alexander, and give each one a laptop with an Internet connection.  But third, as long as we have only one of him, the rest of us have a lot of work cut out for us.  I know, without having to ask, that the only real way I can thank Alexander for coming to my defense, is to use this blog to defend other people (anywhere on the ideological spectrum) who are attacked online for sharing in a spirit of honesty and goodwill.  So if you encounter such a person, let me know—I’d much prefer that to letting me know about the latest attempt to solve NP-complete problems in polynomial time with some analog contraption.


Unrelated Update: Since I started this post with Boaz Barak, let me also point to his recent blog post on why theoretical computer scientists care so much about asymptotics, despite understanding full well that the constants can overwhelm them in practice.  Boaz articulates something that I’ve tried to say many times, but he’s crisper and more eloquent.


Update (Feb. 27): Since a couple people asked, I explain here what I see as the basic problems with the “Dr. Nerdlove” site.


Update (Feb. 28): In the middle of this affair, perhaps the one thing that depressed me the most was Salon’s “Plight of the bitter nerd” headline. Random idiots on the Internet were one thing, but how could a “serious,” “respectable” magazine lend its legitimacy to such casual meanness? I’ve now figured out the answer: I used to read Salon sometimes in the late 90s and early 2000s, but not since then, and I simply hadn’t appreciated how far the magazine had descended into clickbait trash. There’s an amusing fake Salon Twitter account that skewers the magazine with made-up headlines (“Ten signs your cat might be racist” / “Nerd supremacism: should we have affirmative action to get cool people into engineering?”), mixed with actual Salon headlines, in such a way that it would be difficult to tell many of them apart were they not marked. (Indeed, someone should write a web app where you get quizzed to see how well you can distinguish them.) “The plight of the bitter nerd” is offered there as one of the real headlines that’s indistinguishable from the parodies.

“The Man Who Tried to Redeem the World with Logic”

February 18th, 2015

No, I’m not talking about me!

Check out an amazing Nautilus article of that title by Amanda Gefter, a fine science writer of my acquaintance.  The article tells the story of Walter Pitts, who [spoiler alert] grew up on the mean streets of Prohibition-era Detroit, discovered Russell and Whitehead’s Principia Mathematica in the library at age 12 while hiding from bullies, corresponded with Russell about errors he’d found in the Principia, then ran away from home at age 15, co-invented neural networks with Warren McCulloch in 1943, became the protégé of Norbert Wiener at MIT, was disowned by Wiener because Wiener’s wife concocted a lie that Pitts and others whom she hated had seduced Wiener’s daughter, and then became depressed and drank himself to death.  Interested yet?  It’s not often that I encounter a piece of nerd history that’s important and riveting and that had been totally unknown to me; this is one of the times.

Update (Feb. 19): Also in Nautilus, you can check out a fun interview with me.

Update (Feb. 24): In loosely-related news, check out a riveting profile of Geoffrey Hinton (and more generally, of deep learning, a.k.a. re-branded neural networks) in the Chronicle of Higher Education.  I had the pleasure of meeting Hinton when he visited MIT a few months ago; he struck me as an extraordinary person.  Hat tip to commenter Chris W.

Memrefuting

February 11th, 2015

(in which I bring this blog back to the “safe, uncontroversial” territory of arguing with people who think they can solve NP-complete problems in polynomial time)

A few people have asked my opinion about “memcomputing”: a computing paradigm that’s being advertised, by its developers, as a way to solve NP-complete problems in polynomial time.  According to the paper Memcomputing NP-complete problems in polynomial time using polynomial resources and collective states, memcomputing “is based on the brain-like notion that one can process and store information within the same units (memprocessors) by means of their mutual interactions.”  The authors are explicit that, in their view, this idea allows the Subset Sum problem to be solved with polynomial resources, by exploring all 2^n possible subsets in parallel, and that this refutes the Extended Church-Turing Thesis.  They’ve actually built ‘memcomputers’ that solve small instances of Subset Sum, and they hope to scale them up, though they mention hardware limitations that have made doing so difficult—more about that later.

A bunch of people (on Hacker News, Reddit, and elsewhere) tried to explain the problems with the Subset Sum claim when the above preprint was posted to the arXiv last year.  However, an overlapping set of authors has now simply repeated the claim, unmodified, in a feature article in this month’s Scientific American.  Unfortunately the SciAm article is behind a paywall, but here’s the relevant passage:

Memcomputing really shows advantages when applied to one of the most difficult types of problems we know of in computer science: calculating all the properties of a large series of integers. This is the kind of challenge a computer faces when trying to decipher complex codes. For instance, give the computer 100 integers and then ask it to find at least one subset that adds up to zero. The computer would have to check all possible subsets and then sum all numbers in each subset. It would plow through each possible combination, one by one, which is an exponentially huge increase in processing time. If checking 10 integers took one second, 100 integers would take 10^27 seconds—millions of trillions of years … [in contrast,] a memcomputer can calculate all subsets and sums in just one step, in true parallel fashion, because it does not have to shuttle them back and forth to a processor (or several processors) in a series of sequential steps. The single-step approach would take just a single second.

For those tuning in from home: in the Subset Sum problem, we’re given n integers a_1,…,a_n, and we want to know whether there exists a subset of them that sums to a target integer k.  (To avoid trivializing the problem, either k should be nonzero or else the subset should be required to be nonempty; the passage quoted above, which asks for a subset summing to zero, overlooks this.)
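To fix ideas, here’s the naive exponential-time search in ordinary Python (nothing memcomputing-specific, just the brute force that the SciAm passage describes):

```python
from itertools import combinations

def subset_sum(nums, k):
    """Return a nonempty subset of nums summing to k, or None if none exists.
    Naive search: tries all 2^n - 1 nonempty subsets in the worst case."""
    for r in range(1, len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == k:
                return combo
    return None

print(subset_sum([3, -7, 4, 12, -2], 0))   # -> (3, -7, 4)
```

(And indeed, at the quoted rate of 2^10 subsets per second, 100 integers would take 2^100/2^10 = 2^90 ≈ 1.2×10^27 seconds, which is where the figure above comes from.)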

To solve Subset Sum in polynomial time, the basic idea of “memcomputing” is to generate waves at frequencies that encode the sums of all possible subsets of a_i’s, and then measure the resulting signal to see if there’s a frequency there that corresponds to k.
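Here’s a toy numerical sketch of that idea at n=8 (my own reconstruction, with assumed details: I simply assign each nonempty subset a cosine whose frequency in Hz equals its sum; the paper’s actual encoding differs):

```python
import numpy as np

# Toy reconstruction of the wave-based encoding (assumed details, not the
# authors' exact scheme): give each of the 2^n - 1 nonempty subsets a cosine
# at the frequency equal to its sum, superpose them all, then look for the
# target frequency in the spectrum.
rng = np.random.default_rng(1)
n = 8
a = rng.integers(1, 40, size=n)                 # small positive instance
k = int(a[0] + a[3] + a[5])                     # plant a solvable target

sums = [sum(int(a[i]) for i in range(n) if (mask >> i) & 1)
        for mask in range(1, 2**n)]             # all nonempty subset sums

fs, T = 2048, 4.0                               # sample rate (Hz), window (s)
t = np.arange(0, int(T * fs)) / fs
signal = sum(np.cos(2 * np.pi * f * t) for f in sums) / 2**n  # 2^-n attenuation

spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak = spectrum[np.argmin(np.abs(freqs - k))]
print(f"amplitude at the target frequency {k} Hz: {peak:.2e}")
```

At n=8 this works; the question is what happens to the 2^-n prefactor, and to the required frequency resolution, as n grows.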

Alas, there’s a clear scalability problem that seems to me to completely kill this proposal, as a practical way of solving NP-complete problems.  The problem is that the signal being measured is (in principle!) a sum of waves of exponentially many different frequencies.  By measuring this wave and taking a Fourier transform, one will not be able to make out the individual frequencies until one has monitored the signal for an exponential amount of time.  There are actually two issues here:

(1) Even if there were just a single frequency, measuring the frequency to exponential precision will take exponential time. This can be easily seen by contemplating even a moderately large n.  Suppose, for concreteness, that n=1000.  Then we would need to measure a frequency to a precision of one part in ~2^1000. If the lowest frequency were (say) 1Hz, then we would be trying to distinguish frequencies that differ by far less than the Planck scale.  But distinguishing frequencies that close would require so much energy that one would exceed the Schwarzschild limit and create a black hole!  The alternative is to make the lowest frequency slower than the lifetime of the universe, causing an exponential blowup in the amount of time we need to run the experiment.

(2) Because there are exponentially many frequencies, the amplitude of each frequency will get attenuated by an exponential amount.  Again, suppose that n=1000, so that we’re talking about attenuation by a ~2^-1000 factor.  Then given any amount of input radiation that could be gathered in the physical universe, the expected amount of amplitude on each frequency would correspond to a microscopically small fraction of 1 photon — so again, it would take exponential time for us to notice any radiation at all on the frequency that interests us (unless we used an insensitive test that was liable to confuse that frequency with many other nearby frequencies).  I put rough numbers on both issues in the sketch below.
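A back-of-envelope sketch of both issues (the 1 Hz base frequency and the unit of signal power are arbitrary choices of mine):

```python
n = 1000
f_min = 1.0      # assumed base frequency (Hz); an arbitrary choice

# Issue (1): if 2^n distinct subset-sum frequencies must fit within a band
# of width ~f_min, adjacent ones are ~f_min/2^n apart, and the uncertainty
# relation (frequency resolution ~ 1/observation time) forces a window of
T = 2.0**n / f_min
print(f"required observation window: ~{T:.1e} s (age of universe: ~4e17 s)")

# Issue (2): splitting unit signal power among 2^n frequencies leaves each
# frequency an exponentially small share.
share = 2.0**-n
print(f"per-frequency share of the power: ~{share:.1e}")
```

Running this prints a window of ~10^301 seconds and a per-frequency share of ~10^-302: no tweak to the hardware rescues numbers like those.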

What do the authors have to say about these issues?  Here are the key passages from the above-mentioned paper:

all frequencies involved in the collective state (1) are dampened by the factor 2^-n.  In the case of the ideal machine, i.e., a noiseless machine, this would not represent an issue because no information is lost.  On the contrary, when noise is accounted for, the exponential factor represents the hardest limitation of the experimentally fabricated machine, which we reiterate is a technological limit for this particular realization of a memcomputing machine but not for all of them …

In conclusion we have demonstrated experimentally a deterministic memcomputing machine that is able to solve an NP-complete problem in polynomial time (actually in one step) using only polynomial resources.  The actual machine we built clearly suffers from technological limitations due to unavoidable noise that impair [sic] the scalability.  This issue can, however, be overcome in other UMMs [universal memcomputing machines] using other ways to encode such information.

The trouble is that no other way to encode such information is ever mentioned.  And that’s not an accident: as explained above, when n becomes even moderately large, this is no longer a hardware issue; it’s a fundamental physics issue.

It’s important to realize that the idea of solving NP-complete problems in polynomial time using an analog device is far from new: computer scientists discussed such ideas extensively in the 1960s and 1970s.  Indeed, the whole point of my NP-complete Problems and Physical Reality paper was to survey the history of such attempts, and (hopefully!) to serve as a prophylactic against people making more such attempts without understanding the history.  For computer scientists ultimately came to realize that all proposals along these lines simply “smuggle the exponentiality” somewhere that isn’t being explicitly considered, exactly like all proposals for perpetual-motion machines smuggle the entropy increase somewhere that isn’t being explicitly considered.  The problem isn’t a practical one; it’s one of principle.  And I find it unfortunate that the recent memcomputing papers show no awareness of this story.

(Incidentally, quantum computing is interesting precisely because, out of all “post-Extended-Church-Turing” computing proposals, it’s the only one for which we can’t articulate a clear physical reason why it won’t scale, analogous to the reasons given above for memcomputing.  With quantum computing the tables are turned, with the skeptics forced to handwave about present-day practicalities, while the proponents wield the sharp steel of accepted physical law.  But as readers of this blog well know, quantum computing doesn’t seem to promise the polynomial-time solution of NP-complete problems, only of more specialized problems.)

Quantum Machine Learning Algorithms: Read the Fine Print

February 2nd, 2015

So, I’ve written a 4-page essay of that title, which examines the recent spate of quantum algorithms for clustering, classification, support vector machines, and other “Big Data” problems that grew out of a 2008 breakthrough on solving linear systems by Harrow, Hassidim, and Lloyd, as well as the challenges in applying these algorithms to get genuine exponential speedups over the best classical algorithms.  An edited version of the essay will be published as a Commentary in Nature Physics.  Thanks so much to Iulia Georgescu at Nature for suggesting that I write this.

Update (April 4, 2015): The piece has now been published.