Archive for September, 2007

On Deutsches, Shors, and Vaziranis

Friday, September 28th, 2007

My friend Robin Hanson kvetches that scientific contrarians don’t get no respect: even if they ultimately turn out to be right, it’s the more cautious, Johnny-come-lately “conservative radicals” who get the lion’s share of the credit. (Like many of Robin’s posts, this one is written in a purely descriptive mode that nevertheless leaves no doubt where his sympathies lie.) And so, as a firmly-entrenched pillar of the hidebound scientific establishment, I thought I’d tell you our side of the story.

In the US, there are companies whose business it is to patent (or buy patents for) every obvious idea they can think of, then sit around and do nothing with those ideas, wait for some other company to build a successful business around one of them, and sue that company for patent infringement. (The textbook example is NTP’s lawsuit against Research In Motion.)

In science, one occasionally sees the intellectual equivalent of these patent-holding companies: people who publish one flaky idea after another with no data or calculations to back them up; and then if, after years of painstaking research, one of their speculations turns out to be right (or even 10% right), scream “they stole my idea!”

But in my experience — and happily for all concerned — the truth usually lies between this extreme and the opposite extreme described in Robin’s post. To illustrate, let’s consider the example of Robin’s that I know best: that of David Deutsch and quantum computing.

Unlike the patent-holding firms, David Deutsch really was a scientific pioneer, thinking deeply about quantum physics and the Church-Turing Thesis back when basically no one else was. His philosophical insights led him to define the quantum Turing machine model, prove its universality, and realize it might have implications for complexity theory. But his one concrete example of a quantum algorithm — how shall I say? — sucked. In particular, he gave an algorithm to compute the XOR of two bits (and know that one has done so) using one quantum query, with success probability 1/2. (Later it was realized that success probability 1 is achievable, but that’s still only a factor-2 speedup compared to classical computers.) If this were all you’d seen of quantum computing, you would rightly file it away with dozens of other promising ideas that hadn’t led anywhere.
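Just how modest? Here’s a toy simulation (my own reconstruction, assuming the standard textbook presentation of Deutsch’s original algorithm, not his notation): one query to f, and, with probability 1/2, a “success” flag together with the correct value of f(0) XOR f(1).

```python
# Toy simulation (mine, assuming the standard presentation) of Deutsch's
# original algorithm: one query to f, and with probability 1/2 a "success"
# flag plus the correct value of f(0) XOR f(1).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def deutsch_original(f):
    """One run: returns (succeeded, guess_for_xor)."""
    # Basis ordering: index 2*q0 + q1 encodes |q0 q1>.
    state = np.zeros(4)
    state[0] = 1.0                                 # start in |00>
    state = np.kron(H, I) @ state                  # superpose the query qubit
    U = np.zeros((4, 4))                           # oracle |x,y> -> |x, y XOR f(x)>
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    state = np.kron(H, H) @ (U @ state)            # one query, then Hadamard both qubits
    probs = np.abs(state) ** 2
    outcome = np.random.choice(4, p=probs)
    q0, q1 = outcome >> 1, outcome & 1
    return bool(q1), q0                            # q1 = 1 flags success; then q0 = XOR

f = lambda x: 1 - x                                # f(0)=1, f(1)=0, so XOR = 1
runs = [deutsch_original(f) for _ in range(10000)]
wins = [guess for ok, guess in runs if ok]
print(f"success rate: {len(wins) / len(runs):.2f}")         # about 0.5
print(f"successful runs all correct: {all(g == 1 for g in wins)}")
```

A factor-2 speedup, available only half the time: hardly the stuff of revolutions.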

Unless, that is, you were Ethan Bernstein and Umesh Vazirani. These “conservative radicals” from Berkeley decided to put quantum computing under the microscope of theoretical computer science. The result of their labor — besides a bounteous harvest of complexity theorems like BPP ⊆ BQP ⊆ P^#P — was the first example of a black-box problem for which quantum computers gave a superpolynomial speedup over classical randomized ones. Shortly afterward, another conservative, Dan Simon, set out to prove that the speedup of quantum computing was illusory — and ended up with strong evidence (now called Simon’s algorithm) for exactly the opposite conclusion. A year later, yet another conservative — an expert on combinatorics and discrete geometry by the name of Peter Shor — took a close look at Simon’s algorithm, and realized that if you changed the underlying group from (Z_2)^n to the cyclic group Z_N, then you could efficiently compute the period of a black-box function, and thereby factor integers, and thereby break the RSA cryptosystem, and thereby change the world.
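For readers who haven’t seen it, here’s a toy sketch (mine, with the quantum part faked by brute force) of the classical reduction at the heart of Shor’s algorithm: once you can find the period r of f(x) = a^x mod N, which is the step the quantum Fourier transform over Z_N handles efficiently, a nontrivial factor of N falls out of a gcd computation.

```python
# A toy sketch (mine, not from the post): the classical skeleton of Shor's
# algorithm. Quantum period finding over Z_N is what makes this efficient;
# here a brute-force search stands in for the quantum subroutine.
import math
import random

def find_period(a, N):
    """Brute-force stand-in for quantum period finding:
    smallest r > 0 with a^r = 1 (mod N)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N):
    """Return a nontrivial factor of N (assumed odd, composite, not a prime power)."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                  # lucky guess: a already shares a factor with N
        r = find_period(a, N)
        if r % 2 == 1:
            continue                  # need an even period
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                  # a^(r/2) = -1 (mod N): try another a
        return math.gcd(y - 1, N)     # y^2 = 1 but y != +-1, so this gcd is nontrivial

print(shor_factor(15))   # 3 or 5
print(shor_factor(91))   # 7 or 13
```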

A Hansonian might downplay these later achievements — arguing that, were it not for Shor, some other “mainstream mathematician” (a strange description of him!) would’ve sooner or later discovered the factoring algorithm. But it’s equally true that, were it not for Deutsch, some other “renegade physicist” would have come up with quantum Turing machines (and indeed Feynman and Benioff were close). My own judgment is that Deutsch and Shor both made creative scientific contributions of the highest order, and are both deservedly celebrated for them. Indeed, if anyone gets short-shrifted in the usual popular accounts, I think it’s the people in between — like Bernstein, Vazirani, and Simon.

So yes, let’s remember the first person who struck gold, but also the first to realize it wasn’t fool’s gold and the first to figure out how to mine it. Science is a big place; there’s plenty of room for Deutsches, Shors, and even a Vazirani or two.

On drugs, mammoths, and Mahmoud

Wednesday, September 26th, 2007

I was, of course, delighted that Columbia University invited my good friend Mahmoud to speak there, and dismayed only by the tedious introduction by President Lee Bollinger. (“Having demonstrated conclusively that today’s featured speaker is a murderous tyrant with no more right to partake in the civilized world than Genghis Khan or Attila the Hun, let me now, without further ado…”) However long your speaker’s list of achievements, crimes against humanity, etc. might be, I think talk introductions should be two minutes tops.

But since this particular event has already been covered on more blogs than the Monster has subgroups, today I thought I’d roll out a new occasional Shtetl-Optimized feature — in which, for want of anything better to blog about, I discuss some books I’ve read recently.

The Truth About The Drug Companies: How They Deceive Us and What To Do About It by Marcia Angell.

Like many in the US, I once “knew” that drug companies have to charge such absurd prices here because otherwise they wouldn’t be able to fund their R&D. This book reveals the hilarious truth about what drug company R&D actually consists of. My favorite examples: coloring Prozac pink instead of green, marketing it for “premenstrual dysphoric disorder” instead of depression, and charging three times as much for it. Inventing new drugs for high blood pressure that are less effective than diuretics available since the 1950’s, but have the advantage of being patentable. Proving in clinical trials that a new drug works better than an old one, as long as you compare 40mg of the one to 20mg of the other.

The book paints a picture of the pharmaceutical industry as, basically, an organized crime syndicate that’s been successful in co-opting the government. It trumpets the free market but depends almost entirely for its existence on bad patent laws that it helped write; it bribes doctors to prescribe worse expensive drugs instead of better cheap ones; it waits for government-funded university researchers to discover new drugs, then bottles them up, makes billions of dollars, and demands credit for its life-saving innovations.

Among the arguments put forward by the rare negative reviewers of this book on Amazon, the following was my favorite (I’ll let you supply a counterargument):

Who do you folks think are paid higher, scientists in the Unis and government programs, or scientists in the industry? … Marcia saying the Universities and the NIH are more innovative in developing drugs than the Pharma Industry is like saying (using sports analogy) Minor League baseball is better than the MLB. Which players do you think are paid more? Common sense my friends.

The World Without Us by Alan Weisman.

This book has received a lot of attention lately, and deserves all of it. The topic is: if humans disappeared tomorrow, how long would it take for the world’s forests and coral reefs to regenerate, garbage to decompose, excess CO2 to wash out of the sky, giant land mammals to reappear in North America, etc.? Of course this is just a different way of asking: “exactly how badly have humans screwed up the planet?” Weisman’s key insight, though, is that it’s less depressing to read about the world regenerating itself than about its being destroyed.

It’s hard to identify a clear thesis in this book, just lots of interesting observations: for example, that African elephants weren’t hunted to extinction whereas woolly mammoths probably were because only the former evolved to fear humans; and that, if North and South Korea ever reunite, it will be a disaster for the dozens of endangered species that now survive only in a four-mile-wide demilitarized strip between the two. The prose is beautiful throughout, and sometimes reaches heights rarely seen in environmental writing. After explaining the role of volcanoes in climate change, Weisman says: “the problem is, by tapping the Carboniferous Formation and spewing it up into the sky, we’ve become a volcano that hasn’t stopped erupting since the 1700s.”

Barriers to proving P!=NP and moving this blog

Sunday, September 16th, 2007

Thanks, everyone, for your patience, and your numerous complaints about the Technology Review site! Currently, the folks at TR say they can do all the minor things people asked for, like adding permalinks to the titles and letting people include their URL’s with their comments. On the other hand, they can’t make it so you can post comments without logging in, and they can’t decrease the size of the ad bar. (I suggested that they at least turn my sidebar into drop-down menus, thereby increasing the screen width available for the entries; they said they’d look into that.) Also, they can’t provide the full text in RSS (since God forbid, that might let people read the blog without seeing ads), although they can give the first 150 words or so.

As you can imagine, TR’s response has put me in a difficult position. From their perspective, they’ve been bending over backwards to accommodate me; from my perspective (and I gather from most readers’), their offer still falls short of acceptable. When I originally agreed to let them host me, I imagined that the blog would look just as it does now, with maybe a few unobtrusive ads here or there. I didn’t even think to ask about the RSS feed or the screen width available for entries.

And so, after weeks of introspection (well, mostly being tied up with other work), I’ve reached a decision: I will continue to host my blog right here, on Bluehost, until TR comes up with something that both parties can live with. I like the TR people and appreciate their interest, but I’m not in any particular hurry to move, especially if it means crippling this blog so that no one will read it. It’s true that Bluehost sucks, and that I no longer have time to be a webmaster — but once I get grant money, maybe I can pay someone to take care of these things for me.

Finally, since all this self-referentiality gets tiresome, here are the PowerPoint slides for a talk I gave at MIT last week, about recent joint work with Avi Wigderson on a new barrier to proving P≠NP. (Note: The day before the talk, PowerPoint trashed my file, and I had to recreate the entire presentation from memory. Always make backup copies! Excellent advice, in my opinion.)

Abstract:

Algebrization: A New Barrier in Complexity Theory

Any proof of P≠NP will have to overcome two barriers: relativization and natural proofs. Yet over the last decade, we have seen circuit lower bounds (for example, that PP does not have linear-size circuits) that overcome both barriers simultaneously. So the question arises of whether there is a third barrier to progress on the central questions in complexity theory.

In this talk we present such a barrier, which we call “algebraic relativization” or “algebrization.” The idea is that, when we relativize some complexity class inclusion, we should give the simulating machine access not only to an oracle A, but also to the low-degree extension of A over a finite field or ring.

We systematically go through basic results and open problems in complexity theory to delineate the power of the new algebrization barrier. We first show that all known non-relativizing results — both inclusions such as IP=PSPACE and MIP=NEXP, and separations such as MA_EXP ⊄ P/poly — do indeed algebrize. We next show that most open problems — including P versus NP, P versus BPP, and NEXP versus P/poly — will require non-algebrizing techniques, of which we currently have not a single example. In some cases algebrization seems to explain exactly why progress stopped where it did: for example, why we have superlinear circuit lower bounds for PromiseMA but not for NP.

We also exhibit a surprising connection between algebrization and communication complexity. Using this connection, we give an MA-protocol for the Inner Product function with O(√n log(n)) communication (essentially matching a lower bound of Klauck), and describe a pure communication complexity conjecture whose truth would imply P≠NP.
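To make the key definition concrete: below is a small illustration (my own, not code from the paper) of the multilinear extension of a Boolean oracle A over a prime field. An algebrized simulation gets to query the extension at arbitrary field points, not just at the Boolean points where A itself is defined.

```python
# A small illustration (mine, not code from the paper): the multilinear
# extension of a Boolean oracle A: {0,1}^n -> {0,1} over a prime field F_p.
from itertools import product

def multilinear_extension(A, n, p):
    """Return the unique multilinear A_tilde: (F_p)^n -> F_p agreeing with A on {0,1}^n."""
    def A_tilde(x):
        total = 0
        for z in product((0, 1), repeat=n):
            # Multilinear "indicator" term: equals 1 at x = z and 0 at
            # every other Boolean point.
            term = A(z)
            for xi, zi in zip(x, z):
                term = term * (xi if zi else 1 - xi) % p
            total = (total + term) % p
        return total
    return A_tilde

# Example: A = AND of 3 bits, whose extension is simply x1*x2*x3.
A = lambda z: int(all(z))
A_ext = multilinear_extension(A, 3, 101)
assert A_ext((1, 1, 1)) == 1 and A_ext((1, 0, 1)) == 0    # matches A on Boolean points
print(A_ext((5, 7, 2)))   # 70, i.e. 5*7*2 mod 101: a non-Boolean "algebraic" query
```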

Comments welcome. We’ll hopefully have a writeup soon.
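In the meantime, here’s a toy simulation (mine, with invented parameters and names, so treat it as a sketch of the flavor rather than the paper’s actual protocol) of the communication-complexity connection: view the n-bit inputs as √n-by-√n arrays, extend them in the row index to low-degree polynomials over a prime field, and let Merlin send a single univariate polynomial that Alice and Bob spot-check at a random point.

```python
# Toy simulation (mine; parameters invented) of the flavor of the MA protocol
# for Inner Product: arrange the inputs as sqrt(n) x sqrt(n) bit arrays and
# spot-check one low-degree polynomial sent by Merlin.
import random

P = 10007   # prime field size; must exceed n and the proof polynomial's degree
M = 4       # sqrt(n): inputs are M x M bit arrays, so n = 16

def lagrange_basis(i, z):
    """delta_i(z): degree-(M-1) polynomial equal to 1 at z = i, 0 at other rows."""
    num = den = 1
    for k in range(M):
        if k != i:
            num = num * (z - k) % P
            den = den * (i - k) % P
    return num * pow(den, P - 2, P) % P

def row_extension(bits, z):
    """Evaluate the row-wise low-degree extension of an M x M array at row index z."""
    d = [lagrange_basis(i, z) for i in range(M)]
    return [sum(d[i] * bits[i][j] for i in range(M)) % P for j in range(M)]

x = [[random.randint(0, 1) for _ in range(M)] for _ in range(M)]   # Alice's input
y = [[random.randint(0, 1) for _ in range(M)] for _ in range(M)]   # Bob's input

# Merlin: h(z) = sum_j x_ext(z,j) * y_ext(z,j) has degree <= 2(M-1), so its
# values at 2M-1 points determine it: O(sqrt(n) log n) bits of proof.
proof = [sum(a * b for a, b in zip(row_extension(x, z), row_extension(y, z))) % P
         for z in range(2 * M - 1)]

def eval_proof(z):
    """Interpolate Merlin's claimed polynomial at an arbitrary field point z."""
    total = 0
    for i, v in enumerate(proof):
        num = den = 1
        for k in range(2 * M - 1):
            if k != i:
                num = num * (z - k) % P
                den = den * (i - k) % P
        total = (total + v * num * pow(den, P - 2, P)) % P
    return total

# Alice: a random field point plus her extended row, O(sqrt(n) log n) bits.
r = random.randrange(P)
alice_msg = row_extension(x, r)

# Bob: spot-check Merlin at r (a lying Merlin survives with prob <= 2M/P),
# then read the answer off the proof: sum_i h(i) counts the intersections.
assert sum(a * b for a, b in zip(alice_msg, row_extension(y, r))) % P == eval_proof(r)
ip = sum(eval_proof(i) for i in range(M)) % 2
assert ip == sum(x[i][j] * y[i][j] for i in range(M) for j in range(M)) % 2
print("IP(x, y) mod 2 =", ip)
```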

Sellin’ out to the Man

Monday, September 3rd, 2007

Alright, it’s time to tell you: in a couple of days, Shtetl-Optimized will cease its two-year independent existence, and become a part of MIT Technology Review. Please check out the new shtetl at technologyreview.com and let me know in the comments section, here or there, if anything is amiss. (Note: You have to register at technologyreview.com before you can post a comment there, but that should be pretty quick and painless.) If everything’s OK, then we’ll start redirecting the scottaaronson.com URL’s to point to the new location.

Naturally, selling out to an MIT subsidiary is not a step I took lightly. The following considerations are what finally induced me to say “yes”:

  • I’d already sold my soul to MIT, so why not my blog too?
  • As explained earlier, Bluehost (my current hosting provider) sucks: the sites they host routinely stop working, and when they do it’s always your fault and never theirs. Indeed, every webhosting company I’ve dealt with strikes me as basically a scam operation that does a tiny bit of hosting on the side. So when TR told me that they would be that at which the buck stops — and that if anything went wrong I could walk the two blocks to their East Cambridge office and yell at them in person — their pitch fell on receptive ears.
  • From now on, TR’s expert staff will manage all technical aspects of the blog for me, leaving me free to concentrate on deeper, biting-vagina-related matters. This will be particularly welcome as the demands on my time shift from the “severe” to the “ludicrous” range.
  • “The Benjamins.” As explained earlier, as a matter of principle I accept bribes and kickbacks from absolutely anyone, trusting that the money from competing groups will cancel each other out, thereby leaving my overall judgment unbiased. Plus I can actually use the dough, now that I have a mortgage to pay.
  • I’ll now be under contractual obligation to blog “at least twice a week on average.” I actually welcome this change, since it’s the only remedy I can think of for the blog-procrastination (i.e., work) that’s often afflicted me in the past.
  • If this experiment doesn’t work, I’m allowed to back out on two weeks’ notice, retaining all the “rights” to my blog. Of course, I hope and expect that it’ll work.
  • Most importantly, Jason Pontin, the editor-in-chief and publisher of Technology Review, has personally assured me that I will have complete intellectual freedom to blog about anything I want, exactly as I did when the blog was independent. You can rest assured that Jason will come to regret his guarantee in the days and weeks ahead. (TR does have a policy of fact-checking blog entries, but as I explained to them, the very concept of “fact-checking” is not particularly relevant to Shtetl-Optimized.)

Indeed, the only real disadvantage I could see to hosting the blog on TR was the amount of screen space taken up by ads. Sorry about that! Fortunately, the ads look pretty ignorable to me.

Black-hole complexity theory: from joke to science

Monday, September 3rd, 2007

One of the most fun arXiv preprints I’ve read in a while:

Black holes as mirrors: quantum information in random subsystems
by Patrick Hayden and John Preskill

Abstract: We study information retrieval from evaporating black holes, assuming that the internal dynamics of a black hole is unitary and rapidly mixing, and assuming that the retriever has unlimited control over the emitted Hawking radiation. If the evaporation of the black hole has already proceeded past the “half-way” point, where half of the initial entropy has been radiated away, then additional quantum information deposited in the black hole is revealed in the Hawking radiation very rapidly. Information deposited prior to the half-way point remains concealed until the half-way point, and then emerges quickly. These conclusions hold because typical local quantum circuits are efficient encoders for quantum error-correcting codes that nearly achieve the capacity of the quantum erasure channel. Our estimate of a black hole’s information retention time, based on speculative dynamical assumptions, is just barely compatible with the black hole complementarity hypothesis.

Many of this paper’s arguments depend on speculative assumptions about quantum gravity, and might very well be wrong. What’s nice is simply that they’re not “not even wrong”! This is an 18-page paper about the Planck-scale dynamics of black hole horizons where I never once found myself wondering what the authors were trying to say, or how their ideas would in principle be tested.

When I used to spout off about the complexity of retrieving information from a black hole as a function of the Schwarzschild radius r_S, people assumed I was just fishing for laughs. And they were basically right. But even then, I felt sure that actual physicists would eventually say something real about this question, unapologetically using the language of quantum computing and information. Like Alice with her k qubits, I didn’t have to wait as long as I’d thought.

My take on the Koblitz affair

Saturday, September 1st, 2007

Now that Luca, Michael Mitzenmacher, Jonathan Katz, and Oded Goldreich have all weighed in on Neal Koblitz’s critique of modern cryptography in the Notices of the AMS, I can no longer bear to be left out of the action.

My reaction is simple: we computer scientists should feel honored that the mathematicians have finally bestowed on us the level of contempt they once reserved for the physicists.

Update (9/6): If you want to understand what’s actually involved in this controversy, the best starting point I’ve found is this paper by Ivan Damgård.