Archive for the ‘Embarrassing Myself’ Category

The Generalized Linial-Nisan Conjecture is false

Sunday, July 11th, 2010

In a post a year and a half ago, I offered a prize of $200 for proving something called the Generalized Linial-Nisan Conjecture, which basically said that almost k-wise independent distributions fool AC0 circuits.  (Go over to that post if you want to know what that means and why I cared about it.)

Well, I’m pleased to report that that’s a particular $200 I’ll never have to pay.  I just uploaded a new preprint to ECCC, entitled A Counterexample to the Generalized Linial-Nisan Conjecture.  (That’s the great thing about research: no matter what happens, you get a paper out of it.)

A couple friends commented that it was wise to name the ill-fated conjecture after other people rather than myself.  (Then again, who the hell names a conjecture after themselves?)

If you don’t feel like downloading the ECCC preprint, but do feel like scrolling down, here’s the abstract (with a few links inserted):

In earlier work, we gave an oracle separating the relational versions of BQP and the polynomial hierarchy, and showed that an oracle separating the decision versions would follow from what we called the Generalized Linial-Nisan (GLN) Conjecture: that “almost k-wise independent” distributions are indistinguishable from the uniform distribution by constant-depth circuits. The original Linial-Nisan Conjecture was recently proved by Braverman; we offered a $200 prize for the generalized version. In this paper, we save ourselves $200 by showing that the GLN Conjecture is false, at least for circuits of depth 3 and higher.
As a byproduct, our counterexample also implies that Π_2^p ⊄ P^NP relative to a random oracle with probability 1. It has been conjectured since the 1980s that PH is infinite relative to a random oracle, but the best previous result was NP ≠ coNP relative to a random oracle.
Finally, our counterexample implies that the famous results of Linial, Mansour, and Nisan, on the structure of AC0 functions, cannot be improved in several interesting respects.

To dispel any confusion, the $200 prize still stands for the original problem that the GLN Conjecture was meant to solve: namely, giving an oracle relative to which BQP is not in PH.  As I say in the paper, I remain optimistic about the prospects for solving that problem by a different approach, such as an elegant one recently proposed by Bill Fefferman and Chris Umans.  Also, it’s still possible that the GLN Conjecture is true for depth-two AC0 circuits (i.e., DNF formulas).  If so, that would imply the existence of an oracle relative to which BQP is not in AM—already a 17-year-old open problem—and net a respectable $100.

BQP Aarlines

Monday, November 2nd, 2009

The Onion has a new piece—United Airlines Exploring Viability of Stacking Them Like Cordwood—that, as usual, is grossly unrealistic.  If my own experience is any guide, the real United would never waste money on a grated floor for waste disposal, or people to shovel peanuts into a trough.

But The Onion‘s exploration of the geometry of passenger-packing does raise some genuinely interesting questions.  For years, I’ve had this idea to start an airline where, instead of seats, passengers would get personal cubbyholes that were stacked on top of each other like bunk beds.  (I’d make sure the marketing materials didn’t describe them as “coffin-shaped,” though that’s what they would be.)

You could sleep in your cubbyhole—much more easily than in a seat, of course—but you could also read, watch a movie, work on your laptop, or eat (all activities that I don’t mind doing while lying down, and the first two of which I prefer to do lying down).

Besides passenger comfort, my arrangement would have at least two advantages over the standard one:

First, depending on the exact size of the cubbyholes, you could very likely fit more passengers this way, thereby lowering ticket costs.

Second, assuming the cubbyholes were ventilated, you could put little doors on them, thereby giving passengers far more privacy than in a conventional airline.  No more being immiserated by screaming babies or inane conversations, or the B.O. of the person next to you, or reading lights while you’re trying to sleep.  And, as many of you will have noticed, BQP Aarlines could provide amorous couples with a far more comfortable alternative than the bathroom.

So, readers: do you know if any airline has tried something like this?  If not, why not?  Are there strong arguments against it that I haven’t thought of, besides the obvious cultural/psychological ones?  Should I keep my day job?

My diavlog with Eliezer Yudkowsky

Monday, August 17th, 2009

Here it is.  It’s mostly about the Singularity and the Many-Worlds Interpretation.

(I apologize if Eliezer and I agreed too much, and also apologize for not quite realizing that the sun was going to set while I was speaking.)

And here’s the discussion that already took place over at Eliezer’s blogging-grounds, Less Wrong.

Essentials of complexity-theoretic stand-up comedy

Monday, July 13th, 2009

Recently someone asked me how to give funnier talks.  My first response was to recoil at such an insolent question: doesn’t everyone know that at the core of my shtick lies a unique and ineffable je ne sais quoi that can’t be packaged, bottled, or resold?  But the truth was not that I couldn’t give advice; it was that I didn’t want to.  For if everyone knew how easy it was to keep an audience at least half-awake, how would people like me maintain their edge?  By proving better theorems?  Having something new and relevant to say?  These questions answer themselves.

But because I love you, my readers, so deeply, and because I feel guilty about abandoning you for so long, I shall now publicly deconstruct the main ingredients of seminar humor, insofar as I’ve been able to find them.  (A few ingredients are specific to theoretical computer science, but most are more general.)

  1. Make fun of people in the audience.  (Of course, you have to do it in such a way that they’re flattered you’re ripping them and not someone else.)
  2. Ridicule bogus claims related to your topic, particularly claims that received wide currency in the popular press.  (To be honest, I do this not so much because it gets laughs—though it does—but as a small service to humanity.  If I can make one budding crackpot think twice before hitting “Submit” on a disproof of Bell’s Theorem, I will not have lived in vain.  Of course, the ridicule should always focus more on ideas than people; and even then, a few in the audience will frown on it, considering it unscientific or unprofessional.  Forty or fifty crackpots ago, I agreed with them.  It’s only experience that hardened me into a vigilante.)
  3. Incorporate the audience’s shared experiences into your talk (without making a big deal of it, as if it’s the most natural thing in the world).  For example, when it comes time to trot out an Alice/Bob scenario, have yours wryly comment on a previous talk, an excursion everyone went on, a current event (like an election) that everyone actually cares about more than the talk…
  4. Self-deprecate.  (“My first conjecture was falsified.  The following conjecture hasn’t yet been falsified, and is obviously true…”)
  5. Say things that recognize and comment on how neurotic the thought-process of theoretical computer scientists really is, by taking that thought-process to extremes.  (“That’s off by a factor of 10^(10^120), which is only O(1) and is therefore irrelevant.” “For years, people tried unsuccessfully to prove this sort of impossibility result was impossible.  Our result shows the impossibility of their goal.”)
  6. If your field is interdisciplinary, the humor potential is almost limitless.  Are you a physicist?  Ridicule the computer scientists.  A computer scientist?  Ridicule the mathematicians.  A mathematician?  Ridicule the economists.  Chances are, enough differences in notation, terminology, assumptions, and underlying goals will arise in the talk to give you a never-ending supply of material.  “Disciplinary humor” is a more refined, intellectual variant of ethnic humor, and is effective for the same reasons.
  7. Explain your results in an unusually vivid or graphic way.  (“If, at the moment of your death, your whole life flashed before you in an instant, and if while you were alive you’d performed suitable quantum computations on your own brain, then you could solve Graph Isomorphism in polynomial time.”)  This type of humor is my absolute favorite: on a plot with laughter volume on one axis and scientific content on the other, it’s way out on the upper-right-hand corner.
  8. If you’re using PowerPoint, take full advantage of its comic potential: wild animations, text that pops up on the screen to question or even flat-out contradict what you’re saying, a punchline at the bottom of the slide that only gets revealed when you press a key, etc.  I love doing this because I have as much time as I need to “precompute” jokes (though I’ll then often elaborate on them extemporaneously).
  9. Banter with the crowd: if someone makes a crack at your expense, always respond, and even escalate the interaction into a “staged fight” (the rest of the audience will love it).  If someone catches you in a mistake, or you don’t know the answer to a question, make a self-deprecating joke that acknowledges the situation even as it wins you sympathy points.
  10. Have high energy!  Loud, lots of moving around, emotion in your voice … like you can’t wait to invite everyone along to the most exciting journey in the history of the universe.  Not only is that good practice in general (at the least, it keeps the audience from falling asleep), it also creates a general atmosphere in which it’s okay to laugh at jokes.
  11. Pause a few beats before the punchline.  (You can get better at this by watching professional comics.)
  12. Experiment!  If a particular joke bombs, drop it from your rotation; if it brings the house down, recycle it in future talks.  Of course, you should drop a joke once it reaches its saturation point, where much of the audience has already heard it in previous talks.  On the other hand, if this particular audience hasn’t yet heard the joke, disregard your own internal sense of its being “tired”: it could go over just as well as the first time, or better.
  13. Steal ideas shamelessly from other speakers.  (I mean their humor techniques, not their results.)  Just as importantly, study the lame jokes other speakers use, so as to avoid them.  (For example, I estimate that 94% of quantum computing talks include a heavy-handed comment about someone or something being “in superposition”; this has not yet gotten a laugh.  Or the talks repeat stories about Feynman, Bohr, etc. that everyone in the audience has already heard a thousand times.)
  14. Tailor your jokes to the audience’s background.  For instance, I have some jokes that work great in the US, but sink in other countries.  Or work on physicists but not computer scientists, or vice versa.
  15. Make jokes about the country you’re visiting.  Of course, this is subject to common sense: I’ve been known to resort to “zed” / “aboot” jokes in Canada, scone / royalty / powdered wig jokes in England, and neutrality / yodeling jokes in Switzerland, but I usually don’t make the first joke that pops into my head when visiting Germany or Austria.
  16. Take risks!  Here’s an Umeshism: if some of your jokes don’t flop, then you’re not being bold enough.  Do things that people can’t believe anyone would actually do in a talk.  Most people seem to operate under the assumption that when they’re giving a talk, they have to be less funny than in regular conversation, when the truth is the opposite.  If something comes into your head that’s funny to you, and it passes the most flimsy and cursory of offensiveness checks … out with it, and worry later about the consequences!

Three final remarks.

First, reading over the list, I can’t help but feel sheepish about how much one can do with such a crude and obvious bag of tricks.

Second, I only wish I applied this crude bag more consistently!  Particularly when I have a new result and I’m excited about the proof, I all too often ignore my own advice and lapse into boringness.  But at least I notice I’m doing it, get annoyed at myself, and resolve to be crasser, less mature, and less professional the next time around.

Third, you might feel that adding shtick to your talks makes you “shallow,” that all that should matter is the content of your results.  In the relatively rare case where you’re addressing experts in your own sub-sub-subfield, that’s probably true: you can drop the funny business and get straight to the point.  In all other cases, I’m almost certain the audience will understand your results better if you incorporate some shtick than if you don’t.  But hey—it’s up to you whether you want to address an ideal Platonic audience (“more lemmas! no irrelevant distractions! yes! harder! faster!”) or the actual flesh-and-blood hairless apes who are dozing off in the seminar room while you speak.

Wanted: Quantum GGM theorem

Sunday, May 3rd, 2009

A commenter on my last post writes:

Dear Scott, Please keep the focus of your blog.  You have lately been losing science to your blog and started blogging about various loosely related things. One of the ways I subscribed to your blog was because your articles were very computation-oriented. Now you no longer keep the theme. And as you might have heard, shifting topics in your blog will lose your readers.

So today I noticed something bizarre.  A celebrated result in cryptography, due to Goldreich, Goldwasser, and Micali, states that any pseudorandom generator gives rise to a pseudorandom function family.  See Luca’s notes or the original GGM paper for more.

Now I’d always assumed, without thinking about it, that the GGM result “obviously” carries over to the quantum case—so that any pseudorandom generator secure against quantum attack would give rise to a pseudorandom function family secure against quantum attack.  But now that I’m writing a paper that actually relies on this “fact,” I realized I have no idea why it’s true.

Look: in the GGM argument, you start with a pseudorandom generator G:{0,1}^n→{0,1}^{2n}, and you apply it recursively to produce a family of functions f_s:{0,1}^n→{0,1}^n, where s is the seed.  You then consider a hypothetical polynomial-time algorithm A that distinguished f_s from a truly random function.  You show how you could use A to create a polynomial-time algorithm that distinguished the output of G from a truly random 2n-bit string—thereby contradicting the starting assumption that G was pseudorandom.

The trouble is, the argument relies crucially on the fact that A examines only a polynomial number of outputs of f_s—intuitively so that you can run a hybrid argument, changing the outputs that A actually examines one by one into truly random strings.  But if A is a quantum algorithm, then (duh) it can examine all 2^n outputs of f_s in superposition!  So any argument that depends on “watching A to see which inputs it queries” is toast.
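The tree construction itself fits in a few lines.  Here’s a minimal sketch, with SHA-256 standing in for the length-doubling generator G purely for illustration (it is emphatically not the pseudorandom generator the theorem assumes; all the parameters below are toy values):

```python
import hashlib

N = 16  # seed/output length in bytes (toy parameter)

def G(s: bytes) -> bytes:
    """Toy length-doubling 'generator': N bytes in, 2N bytes out.
    (SHA-256 is just an illustrative stand-in for a real PRG.)"""
    return hashlib.sha256(b'L' + s).digest()[:N] + \
           hashlib.sha256(b'R' + s).digest()[:N]

def f(s: bytes, x: str) -> bytes:
    """GGM construction: walk down a depth-|x| binary tree of
    G-applications, keeping the left or right half of G's output
    according to each successive bit of the input x."""
    for bit in x:
        out = G(s)
        s = out[:N] if bit == '0' else out[N:]
    return s

seed = bytes(N)        # all-zero toy seed
y = f(seed, '0110')    # evaluate f_s on the input string 0110
assert len(y) == N     # the output is again an N-byte block
```

A classical distinguisher that makes polynomially many queries touches only polynomially many root-to-leaf paths of this tree, which is exactly what the hybrid argument exploits; a quantum distinguisher can query the leaves in superposition.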

But maybe we can recover the same conclusion in a fancier way?  For at least seven years, I’ve been going around conjecturing the following:

Conjecture (:)): Let Q be a quantum algorithm that makes T queries to a Boolean input X ∈ {0,1}^N.  Then for all ε,δ>0, there exists a deterministic classical algorithm that makes poly(T,1/ε,log(1/δ)) queries to X, and that approximates Q’s acceptance probability to within ε on a 1-δ fraction of inputs.

My motivation for Conjecture (:)) had nothing to do with cryptography.  I was interested in whether we could rule out the possibility that P=BQP relative to a random oracle with probability 1.  If Conjecture (:)) holds—and if the classical algorithm is anything like I think it is—then we can’t rule it out, at least not without proving P ≠ PSPACE or an even stronger separation in the unrelativized world.

It now occurs to me that, if we knew how to prove Conjecture (:)), then maybe we could push through a quantum GGM argument using similar ideas—that is, by identifying a tiny subset of inputs to f_s that the quantum algorithm’s acceptance probability “really” depends on.  Alas, I have good reason to believe that Conjecture (:)) is hard.

So the task remains: prove a quantum GGM theorem.  Or maybe I’m missing something completely obvious?

PS. The promised report on the QIS conference in Virginia is coming tomorrow.  Take that, future self!

Update (5/3): An anonymous commenter points out that we can use a simpler hybrid argument of Razborov and Rudich—which doesn’t break down in the quantum case—to show that if there exists a PRG that’s secure against 2^(n^Ω(1))-time quantum adversaries, then there also exists a PRF with polynomial seed length that’s secure against exponential-time quantum adversaries.  That somehow hadn’t occurred to me, and it’s good enough for my purposes.  (Masked cryptographer: emerge ye from the shadows, and claim thy rightful honour in my Acknowledgments!)  On the other hand, the extremely interesting question still stands of whether one can prove a “strong,” GGM-style reduction: from PRGs secure against f(n)-time quantum adversaries to PRFs with linear seed length secure against f(n)^Ω(1)-time quantum adversaries, for any superpolynomial f.

How long could a black hole remain in the center of the earth?

Sunday, December 21st, 2008

The above question came up in conversation with Michael Vassar and some other nerds in New York City yesterday (before I went with relatives to see Gimpel Tam, an extraordinarily dark and depressing musical performed entirely in Yiddish).  Look, I know a massive black hole would swallow the earth extremely quickly, and I also know that a microscopic black hole would quickly evaporate as Hawking radiation.  So suppose we chose one of intermediate size so as to maximize the earth’s survival time—how long a time could we achieve?  (Does the answer depend on the viscosity of the magma or whatever else is in the earth’s core?)  Sure, I could try to calculate an answer myself, but why bother when so many physicists read this blog?  Pencils out!

Volume 4 is already written (in our hearts)

Thursday, January 10th, 2008

Today is the 70th birthday of Donald E. Knuth: Priest of Programming, Titan of Typesetting, Monarch of MMIX, intellectual heir to Turing and von Neumann, greatest living computer scientist by almost-universal assent … alright, you get the idea.

That being the case, Jeff Shallit proposed to various CS bloggers that we should all band together and present the master with a birthday surprise: one post each about how his work has inspired us. The posts are now in! Readers who don’t know about Knuth’s work (are there any?) should start with this post from Luca. Then see this from David Eppstein, this from Doron Zeilberger, this from Jeff, this from Bill Gasarch, and this from Suresh.

Knuth’s impact on my own work and thinking, while vast, has not been directly through research: his main influence on my BibTeX file is that if not for him, I wouldn’t have a BibTeX file. (One reason is that I’m one of the people Doron Zeilberger attacks for ignoring constant factors, and supporting what he calls “the ruling paradigm in computational complexity theory, with its POL vs. EXP dichotomy.”) So I decided to leave Knuth’s scientific oeuvre to others, and to concentrate in this post on his contributions to two other fields: mathematical exposition and computational theology.

Knuth’s creation of the TeX typesetting system — his original motivation being to perfect the layout of his own Art of Computer Programming books — was remarkable in two ways. First, because scientific typesetting is of so little interest to industry, it’s not clear if something like TeX would ever have been invented if not for one man and his borderline-neurotic perfectionism. Second, TeX is one of the only instances I can think of when a complicated software problem was solved so well that it never had to be solved again (nor will it for many decades, one hazards to guess). At least in math, computer science, and physics, the adoption of TeX has been so universal that failure to use it is now a reliable crackpot indicator.

From Wikipedia:

Since version 3, TeX has used an idiosyncratic version numbering system, where updates have been indicated by adding an extra digit at the end of the decimal, so that the version number asymptotically approaches π. This is a reflection of the fact that TeX is now very stable, and only minor updates are anticipated. The current version of TeX is 3.141592; it was last updated in December 2002 … Even though Donald Knuth himself has suggested a few areas in which TeX could have been improved, he indicated that he firmly believes that having an unchanged system that will produce the same output now and in the future is more important than introducing new features. For this reason, he has stated that the “absolutely final change (to be made after my death)” will be to change the version number to π, at which point all remaining bugs will become features.
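The digit-appending scheme is simple enough to mimic in a few lines (a toy illustration of the numbering rule quoted above, not Knuth’s actual release machinery; the hard-coded digit string is just π):

```python
PI_DIGITS = "3.14159265358979"  # enough digits of pi for this illustration

def tex_version(n_updates: int) -> str:
    """TeX-style version after n post-3.1 updates: the scheme appends
    one more decimal digit of pi with each update, so the version
    number converges to pi."""
    return PI_DIGITS[: 3 + n_updates]  # "3.1", then "3.14", "3.141", ...

print(tex_version(0))  # 3.1
print(tex_version(5))  # 3.141592 -- the version current as of the quoted passage
```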

But Knuth’s interest in scientific exposition goes far beyond typesetting. His 1974 Surreal Numbers: How Two Ex-Students Turned on to Pure Mathematics and Found Total Happiness, which he wrote in one week, was weirdness at the highest possible level: the Beatles’ White Album of math. It’s said to represent the only occasion in history when a new mathematical theory (Conway’s theory of surreal numbers) was introduced in the form of a novel. (Though admittedly, with the exception of one sex scene, this is a “novel” whose plot development mostly takes the form of lemmas.)

Those seeking to improve their own writing should consult Mathematical Writing (available for free on the web), the lecture notes from a course at Stanford taught by Knuth, Tracy Larrabee, and Paul Roberts. Like a lot of Knuth’s work, Mathematical Writing has the refreshing feel of an open-ended conversation: we get to see Knuth interact with students, other teachers, and visiting luminaries like Mary-Claire van Leunen, Paul Halmos, Jeff Ullman, and Leslie Lamport.

Since I’ve blogged before about the battle over academic publishing, I also wanted to mention Knuth’s remarkable and characteristically methodical 2003 letter to the editorial board of the Journal of Algorithms. Knuth asks in a postscript that his letter not be distributed widely — but not surprisingly, it already has been.

In the rest of this post, I’d like to talk about Things A Computer Scientist Rarely Talks About, the only book of Knuth’s for which I collected one of his coveted $2.56 prizes for spotting an error. (Nothing important, just a typo.)

Things is based on a series of lectures on computer science and religion that Knuth gave in 1997 at MIT. (At the risk of oversimplifying: Knuth practices Christianity, but in a strange form less interested in guns and gays than in some business about “universal compassion.”) Perhaps like most readers, when I bought Things I expected yet another essay on “non-overlapping magisteria,” a famous scientist’s apologia justifying his belief in the Virgin Birth and the Resurrection. But Knuth likes to surprise, and what he delivers instead is mostly a meditation on the typography of Bible verses [sic]. More precisely, Things is a “metabook”: a book about the lessons Knuth learned while writing and typesetting an earlier book, one I haven’t yet read, that analyzed verse 3:16 of every book of the Bible.

But this being a lecture series, Knuth also fields questions from the audience about everything from sin and redemption to mathematical Platonism. He has a habit of parrying all the really difficult questions with humor; indeed, he does this so often one comes to suspect humor is his answer. As far as I could tell, there’s only one passage in the entire book where Knuth directly addresses what atheists are probably waiting for him to address. From one of the question periods:

Q: How did you become so interested in God and religion in the first place?

A: It was because of the family I was born into. If I had been born in other circumstances, my religious life would no doubt have been quite different. (p. 155)

And then on to the next question.

To me, what’s remarkable about this response is that Knuth without any hesitation concedes what skeptics from Xenophanes to Richard Dawkins have held up as the central embarrassment of religion. This, of course, is the near-perfect correlation between the content of religious belief and the upbringing of the believer. How, Dawkins is fond of asking, could there possibly be such a thing as a Christian or Hindu or Jewish child? How could a four-year-old already know what he or she thinks about profound questions of cosmogony, history, and ethics — unless, of course, the child were brainwashed by parents or teachers?

My Bayesian friends, like Robin Hanson, carry this argument a step further. For them, the very fact that Knuth knows his beliefs would be different were he born to different parents must, assuming he’s rational, force him to change his beliefs. For how can he believe something with any conviction, if he knows his belief was largely determined by a logically-irrelevant coin toss?

And yet, openly defying the armies of Bayes arrayed against him, here we have Knuth saying, in effect: yes, I know that if I were some other person my beliefs would be different, but I’m not that other person; I’m Knuth.

So, readers: is Knuth’s response a cop-out, the understandable yet ultimately-indefensible defense of an otherwise-great scientist who never managed to free himself from certain childhood myths? Or is it a profound acknowledgment that none of us ever escape the circumstances of our birth, that we might as well own up to it, that tolerance ought not to require a shared prior, that the pursuit of science and other universal values can coexist with the personal and incommunicable?

Taking a cue from Knuth himself, I’m going to dodge this question. Instead, I decided to end this post by quoting some of my favorite passages from Chapter 6 of Things A Computer Scientist Rarely Talks About.

On computer science and God: “When I talk about computer science as a possible basis for insights about God, of course I’m not thinking about God as a super-smart intellect surrounded by large clusters of ultrafast Linux workstations and great search engines. That’s the user’s point of view.” (p. 168)

“I think it’s fair to say that many of today’s large computer programs rank among the most complex intellectual achievements of all time. They’re absolutely trivial by comparison with any of the works of God, but still they’re somehow closer to those works than anything else we know.” (p. 169)

On infinity: “Infinity is a red herring. I would be perfectly happy to give up immortality if I could only live Super K years before dying [‘Super K’ being defined similarly to an Ackermann number]. In fact, Super K nanoseconds would be enough.” (p. 172)

On the other hand: “I once thought, if I ever had to preach a sermon in church, I would try to explain Cantor’s theorem to my non-mathematical friends so that they could understand something about the infinite.” (p. 172)

On God and computational complexity: “I think it’s fair to say that God may well be bound by the laws of computational complexity … But I don’t recommend that theologians undertake a deep study of computational complexity (unless, of course, they really enjoy it). ” (p. 174)

On quantum mechanics: “Several years ago, I chanced to open Paul Dirac’s famous book on the subject and I was surprised to find out that Dirac was not only an extremely good writer but also that his book was not totally impossible to understand. The biggest surprise, however — actually a shock — was to learn that the things he talks about in that book were completely different from anything I had ever read in Scientific American or in any other popular account of the subject. Apparently when physicists talk to physicists, they talk about linear transformations of generalized Hilbert spaces over the complex numbers; observable quantities are eigenvalues and eigenfunctions of Hermitian linear operators. But when physicists talk to the general public they don’t dare mention such esoteric things, so they speak instead about particles and spins and such, which are much less than half the story. No wonder I could never really understand the popular articles.” (p. 181)

“The extra detail that gets suppressed when quantum mechanics gets popularized amounts to the fact that, according to quantum mechanics, the universe actually consists of much more data than could ever be observed.” (p. 182)

On free will and the problem of evil: “I can design a program that never crashes if I don’t give the user any options. And if I allow the user to choose from only a small number of options, limited to things that appear on a menu, I can be sure that nothing anomalous will happen, because each option can be foreseen in advance and its effects can be checked. But if I give the user the ability to write programs that will combine with my own program, all hell might break loose. (In this sense the users of Emacs have much more free will than the users of Microsoft Word.) … I suppose we could even regard Figure 5 [a binary tree representing someone’s choices] as the Tree of the Knowledge of Good and Evil.” (p. 189-190)

I’m liveblogging from the Taj Mahal

Saturday, December 22nd, 2007

No particular news to report — it’s about the same as it was 400 years ago, I guess. I just wanted to liveblog from the Taj Mahal, is all. (Jonathan Walgate is the one who suggested it.) Now I’ll go back to looking at it.

The most trivial theorem I’ve ever written up

Wednesday, May 2nd, 2007

Theorem: Suppose NP-complete problems are efficiently solvable by quantum computers. Then either the polynomial hierarchy collapses, or else BQP ⊄ AM (that is, quantum computations can’t be efficiently simulated by Arthur-Merlin protocols).

Proof: Suppose NP ⊆ BQP and BQP ⊆ AM. Since BQP is closed under complement, coNP ⊆ BQP ⊆ AM, and hence the polynomial hierarchy collapses to the second level by a result of Boppana, Håstad, and Zachos.

Note: If only we could delete the weasel phrase “or else BQP ⊄ AM” from my Most Trivial Theorem, we would’ve achieved a long-sought breakthrough in quantum computing theory. In particular, we would’ve shown that any fast quantum algorithm to solve NP-complete problems would imply an unlikely collapse of classical complexity classes. But while the weasel phrase is weaselly enough to make the Most Trivial Theorem a triviality, I don’t think it’s infinitely weaselly. The reason is my growing suspicion that BQP ⊆ AM in the unrelativized world.

Second Note: When I call this my “Most Trivial Theorem,” obviously I’m excluding homework exercises.

Physics for Doofuses: Understanding Electricity

Sunday, April 15th, 2007

Welcome to an occasional new Shtetl-Optimized series, where physicists get to amuse themselves by watching me struggle to understand the most basic concepts of their discipline. I’ll consider my post on black hole singularities to be retroactively part of this series.

Official motto: “Because if I talked about complexity, you wouldn’t understand it.”

Unofficial motto: “Because if I talked about climate change, I’d start another flamewar — and as much as I want to save civilization, I want even more for everyone to like me.”

Today’s topic is Understanding Electricity. First of all, what makes electricity confusing? Well, besides electricity’s evil twin magnetism (which we’ll get to another time), what makes it confusing is that there are six things to keep track of: charge, current, energy, power, voltage, and resistance, which are measured respectively in coulombs, amps, joules, watts, volts, and ohms. And I mean, sure you can memorize formulas for these things, but what are they, in terms of actual electrons flowing through a wire?

Alright, let’s take ’em one by one.

Charge is the q in kqq/r². Twice as many electrons, twice as much charge. ’Nuff said.

Current is charge per unit time. It’s how many electrons are flowing through a cross-section of the wire every second. If you’ve got 100 amps coming out, you can send 50 this way and 50 that way, or π this way and 100-π that way, etc.

Energy … Alright, even I know this one. Energy is what we fight wars to liberate. In our case, if you have a bunch of electrons going through a wire, then the kinetic energy scales like the number of electrons times the square of their speed.

Power is energy per unit time: how much energy does your appliance consume every second? Duh, that’s why a 60-watt light bulb is environmentally-friendlier than a 100-watt bulb.

Voltage is the first one I had trouble with back in freshman physics. It’s energy per charge, or power per current. Intuitively, voltage measures how much energy gets imparted to each individual electron. Thus, if you have a 110-volt hairdryer and you plug it into a 220-volt outlet, then the trouble is that the electrons have twice as much energy as the hairdryer expects. This is what transformers are for: to ramp voltages up and down.
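The first five quantities fit together in simple ratios, which is easy to sanity-check in code. Here's a quick sketch (the gadget and all the numbers are made up purely for illustration):

```python
# Toy scenario (purely illustrative): a gadget draws 2 coulombs of charge
# per second at 110 volts, and runs for 10 seconds.
charge_per_second = 2.0   # coulombs per second
voltage = 110.0           # volts = joules per coulomb (energy per electron, scaled up)
seconds = 10.0

current = charge_per_second            # amps are just coulombs per second
power = voltage * current              # watts = joules per second
energy = power * seconds               # joules
charge = charge_per_second * seconds   # total coulombs delivered

print(current)          # 2.0 amps
print(power)            # 220.0 watts
print(energy)           # 2200.0 joules
print(energy / charge)  # 110.0 -- recovering volts as joules per coulomb
```

Note how voltage falls out at the end as energy divided by charge, which is exactly its definition.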

Incidentally, the ability to transform voltages is related to why what comes out of your socket is alternating current (AC) instead of direct current (DC). AC, of course, is the kind where the electrons switch direction 60 times or so per second, while DC is the kind where they always flow in the same direction. For computers and other electronics, you clearly want DC, since logic gates are unidirectional. And indeed, the earliest power plants did transmit DC. In the 1890’s, Thomas Edison fought vigorously against the adoption of AC, going so far as to electrocute dogs, horses, and even an elephant using AC in order to “prove” that it was unsafe. (These demonstrations proved about as much as D-Wave’s quantum computer — since needless to say, one can also electrocute elephants using DC. To draw any conclusions a comparative study is needed.)

So why did AC win? Because it turns out that it’s not practical to transmit DC over distances of more than about a mile. The reason is this: the longer the wire, the more power gets lost along the way. On the other hand, the higher the voltage, the less power gets lost along the way. This means that if you want to send power over a long wire and have a reasonable amount of it reach its destination, then you want to transmit at high voltages. But high voltages are no good for household appliances, for safety and other reasons. So once the power gets close to its destination, you want to convert back down to lower voltages.
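The reason high voltage helps is worth making quantitative: for a fixed power P = V·I being sent, raising the voltage lowers the current, and the power burned up in the wire itself goes like the current squared times the wire's resistance. A sketch with made-up numbers:

```python
# Made-up numbers: send 1 megawatt down a wire with 5 ohms of resistance,
# and compare a low transmission voltage against a high one.
power_sent = 1_000_000.0   # watts
wire_resistance = 5.0      # ohms

for voltage in (1_000.0, 100_000.0):
    current = power_sent / voltage        # amps needed to carry that power
    loss = current**2 * wire_resistance   # watts dissipated in the wire itself
    print(voltage, loss, loss / power_sent)
# At 1 kV the wire eats 5 MW -- more than we even sent, so nothing arrives.
# At 100 kV the loss is a mere 500 W, i.e. 0.05% of the power sent.
```

A hundredfold increase in voltage cuts the loss by a factor of ten thousand, which is why long-distance lines run at hundreds of kilovolts.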

Now, the simplest way to convert high voltages to low ones was discovered by Michael Faraday, and relies on the principle of electromagnetic induction. This is the principle according to which a changing electric current creates a changing magnetic field, which can in turn be used to drive another current. (Damn, I knew we wouldn’t get far without bumping into electricity’s evil and confusing magnetwin.) And that gives us a simple way to convert one voltage to another — analogous to using a small, quickly-rotating gear to drive a big, slowly-rotating gear.

So to make a long story short: while in principle it’s possible to convert voltages with DC, it’s more practical to do it with AC. And if you don’t convert voltages, then you can only transmit power for about a mile — meaning that you’d have to build millions of tiny power plants, unless you only cared about urban centers like New York.

Resistance is the trickiest of the six concepts. Basically, resistance is the thing you need to cut in half, if you want to send twice as much current through a wire at the same voltage. If you have two appliances hooked up in series, the total resistance is the sum of the individual resistances: Rtot = R1 + R2. On the other hand, if you have two appliances hooked up in parallel, the reciprocal of the total resistance is the sum of the reciprocals of the individual resistances: 1/Rtot = 1/R1 + 1/R2. If you’re like me, you’ll immediately ask: why should resistance obey these identities? Or to put it differently, why should the thing that obeys one or both of these identities be resistance, defined as voltage divided by current?
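The two combination rules are two lines of code apiece. Here's a minimal sketch (the helper names `series` and `parallel` are my own, just for illustration):

```python
def series(*resistances):
    # Resistors in series: the total resistance is the plain sum.
    return sum(resistances)

def parallel(*resistances):
    # Resistors in parallel: the reciprocals add.
    return 1.0 / sum(1.0 / r for r in resistances)

print(series(100.0, 100.0))    # 200.0 ohms
print(parallel(100.0, 100.0))  # 50.0 ohms -- two identical parallel paths halve the resistance
```

The parallel case is the intuitive one: giving the current a second identical path to flow through is like doubling the wire's cross-section, so twice the current gets through at the same voltage.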

Well, as it turns out, the identities don’t always hold. That they do in most cases of interest is just an empirical fact, called Ohm’s Law. I suspect that much confusion could be eliminated in freshman physics classes, were it made clear that there’s nothing obvious about this “Law”: a new physical assumption is being introduced. (Challenge for commenters: can you give me a handwaving argument for why Ohm’s Law should hold? The rule is that your argument has to be grounded in terms of what the actual electrons in a wire are doing.)

Here are some useful formulas that follow from the above discussion:

Power = Voltage²/Resistance = Current² × Resistance = Voltage × Current
Voltage = Power/Current = Current × Resistance = √(Power × Resistance)
Resistance = Voltage/Current = Power/Current² = Voltage²/Power
Current = Power/Voltage = Voltage/Resistance = √(Power/Resistance)
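Every formula in that table follows from just two relations, V = I × R (Ohm's Law) and P = V × I. A quick cross-check with arbitrary numbers:

```python
import math

current = 3.0       # amps (arbitrary)
resistance = 7.0    # ohms (arbitrary)
voltage = current * resistance   # Ohm's Law: V = I * R  -> 21.0 volts
power = voltage * current        # P = V * I             -> 63.0 watts

# All the derived identities from the table should now agree:
assert math.isclose(power, voltage**2 / resistance)
assert math.isclose(power, current**2 * resistance)
assert math.isclose(voltage, math.sqrt(power * resistance))
assert math.isclose(resistance, power / current**2)
assert math.isclose(resistance, voltage**2 / power)
assert math.isclose(current, math.sqrt(power / resistance))
print("all identities check out")
```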

Understand? Really? Take the test!

Update (4/16): Chad Orzel answers my question about Ohm’s Law.