Welcome to an occasional new *Shtetl-Optimized* series, where physicists get to amuse themselves by watching me struggle to understand the most basic concepts of their discipline. I’ll consider my post on black hole singularities to be retroactively part of this series.

Official motto: *“Because if I talked about complexity, you wouldn’t understand it.”*

Unofficial motto: *“Because if I talked about climate change, I’d start another flamewar — and as much as I want to save civilization, I want even more for everyone to like me.”*

Today’s topic is **Understanding Electricity**. First of all, what makes electricity confusing? Well, besides electricity’s evil twin magnetism (which we’ll get to another time), what makes it confusing is that there are six things to keep track of: charge, current, energy, power, voltage, and resistance, which are measured respectively in coulombs, amps, joules, watts, volts, and ohms. And I mean, sure you can memorize formulas for these things, but what *are* they, in terms of actual electrons flowing through a wire?

Alright, let’s take ’em one by one.

**Charge** is the q in kqq/r^{2}. Twice as many electrons, twice as much charge. ‘Nuff said.

**Current** is charge per unit time. It’s how many electrons are flowing through a cross-section of the wire every second. If you’ve got 100 amps coming out, you can send 50 this way and 50 that way, or π this way and 100-π that way, etc.

**Energy** … Alright, even *I* know this one. Energy is what we fight wars to liberate. In our case, if you have a bunch of electrons going through a wire, then the energy scales like the number of electrons times the speed of the electrons squared.

**Power** is energy per unit time: how much energy does your appliance consume every second? Duh, that’s why a 60-watt light bulb is environmentally-friendlier than a 100-watt bulb.

**Voltage** is the first one I had trouble with back in freshman physics. It’s energy per charge, or power per current. Intuitively, voltage measures how much energy gets imparted to each individual electron. Thus, if you have a 110-volt hairdryer and you plug it into a 220-volt outlet, then the trouble is that the electrons have twice as much energy as the hairdryer expects. This is what *transformers* are for: to ramp voltages up and down.

Incidentally, the ability to transform voltages is related to why what comes out of your socket is *alternating current* (AC) instead of *direct current* (DC). AC, of course, is the kind where the electrons switch direction 60 times or so per second, while DC is the kind where they always flow in the same direction. For computers and other electronics, you clearly want DC, since logic gates are unidirectional. And indeed, the earliest power plants did transmit DC. In the 1890s, Thomas Edison fought vigorously against the adoption of AC, going so far as to electrocute dogs, horses, and even an elephant using AC in order to “prove” that it was unsafe. (These demonstrations proved about as much as D-Wave’s quantum computer — since needless to say, one can *also* electrocute elephants using DC. To draw any conclusions, a comparative study is needed.)

So why did AC win? Because it turns out that it’s not practical to transmit DC over distances of more than about a mile. The reason is this: the longer the wire, the more power gets lost along the way. On the other hand, the higher the voltage, the *less* power gets lost along the way. This means that if you want to send power over a long wire and have a reasonable amount of it reach its destination, then you want to transmit at high voltages. But high voltages are no good for household appliances, for safety and other reasons. So once the power gets close to its destination, you want to convert back down to lower voltages.
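The scaling here is worth seeing with actual numbers. A back-of-envelope Python sketch (the 10 MW load and the line resistance are made-up figures, chosen only to illustrate the tradeoff):

```python
# For a fixed power P sent down a line of resistance R_line, the current
# drawn is I = P / V, and the line itself burns P_loss = I**2 * R_line.
# Higher voltage -> less current -> far less loss.

def line_loss(power_w, volts, r_line_ohms):
    current = power_w / volts            # amps needed at this voltage
    return current ** 2 * r_line_ohms    # watts dissipated in the line

P = 10e6        # deliver 10 MW (illustrative figure)
R_LINE = 0.5    # ohms of wire resistance (illustrative figure)

low_v_loss = line_loss(P, 2_400, R_LINE)      # distribution-level voltage
high_v_loss = line_loss(P, 240_000, R_LINE)   # transmission-level voltage

# 100x the voltage means 1/100th the current, hence 1/10,000th the loss.
assert abs(low_v_loss / high_v_loss - 100 ** 2) < 1e-6
```

At 2,400 volts, most of the 10 megawatts never arrives; at 240,000 volts, the line loss is negligible, which is exactly why you step up for the long haul and back down near the destination.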

Now, the simplest way to convert high voltages to low ones was discovered by Michael Faraday, and relies on the principle of electromagnetic induction. This is the principle according to which a changing electric current creates a changing magnetic field, which can in turn be used to drive another current. (Damn, I knew we wouldn’t get far without bumping into electricity’s evil and confusing magnetwin.) And that gives us a simple way to convert one voltage to another — analogous to using a small, quickly-rotating gear to drive a big, slowly-rotating gear.

So to make a long story short: while *in principle* it’s possible to convert voltages with DC, it’s more practical to do it with AC. And if you don’t convert voltages, then you can only transmit power for about a mile — meaning that you’d have to build millions of tiny power plants, unless you only cared about urban centers like New York.

**Resistance** is the trickiest of the six concepts. Basically, resistance is the thing you need to cut in half, if you want to send twice as much current through a wire at the same voltage. If you have two appliances hooked up serially, the total resistance is the sum of the individual resistances: R_{tot} = R_{1} + R_{2}. On the other hand, if you have two appliances hooked up in parallel, the reciprocal of the total resistance is the sum of the reciprocals of the individual resistances: 1/R_{tot} = 1/R_{1} + 1/R_{2}. If you’re like me, you’ll immediately ask: *why* should resistance obey these identities? Or to put it differently, why should the thing that obeys one or both of these identities be *resistance*, defined as voltage divided by current?
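For the programmers in the audience, the two identities above are a few lines of code (this is just a restatement of the formulas, nothing more):

```python
# Series and parallel resistance, exactly as stated in the post.

def series(*rs):
    # R_tot = R_1 + R_2 + ...
    return sum(rs)

def parallel(*rs):
    # 1/R_tot = 1/R_1 + 1/R_2 + ...
    return 1.0 / sum(1.0 / r for r in rs)

# Two identical appliances in parallel halve the resistance, which is
# exactly what's needed to draw twice the current at the same voltage.
assert series(10.0, 10.0) == 20.0
assert abs(parallel(10.0, 10.0) - 5.0) < 1e-12
```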

Well, as it turns out, the identities *don’t* always hold. That they do in most cases of interest is just an empirical fact, called Ohm’s Law. I suspect that much confusion could be eliminated in freshman physics classes, were it made clear that there’s nothing obvious about this “Law”: a new physical assumption is being introduced. (Challenge for commenters: can you give me a handwaving argument for *why* Ohm’s Law should hold? The rule is that your argument has to be grounded in terms of what the actual electrons in a wire are doing.)

Here are some useful formulas that follow from the above discussion:

Power = Voltage^{2}/Resistance = Current^{2} x Resistance = Voltage x Current

Voltage = Power/Current = Current x Resistance = √(Power x Resistance)

Resistance = Voltage/Current = Power/Current^{2} = Voltage^{2}/Power

Current = Power/Voltage = Voltage/Resistance = √(Power/Resistance)
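If you’d rather trust a computer than my algebra, here’s a quick check that the whole table collapses to two facts — V = I·R and P = V·I — using an arbitrary 12-volt, 3-amp example:

```python
import math

# Start from Ohm's law (V = I*R) and P = V*I with arbitrary numbers,
# then verify every entry in the table above.
V, I = 12.0, 3.0   # illustrative values
R = V / I          # 4 ohms
P = V * I          # 36 watts

assert math.isclose(P, V ** 2 / R) and math.isclose(P, I ** 2 * R)
assert math.isclose(V, P / I) and math.isclose(V, math.sqrt(P * R))
assert math.isclose(R, P / I ** 2) and math.isclose(R, V ** 2 / P)
assert math.isclose(I, P / V) and math.isclose(I, math.sqrt(P / R))
```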

Understand? Really? Take the test!

**Update (4/16):** Chad Orzel answers my question about Ohm’s Law.

Comment #1 April 15th, 2007 at 9:22 am

I think there’s a typo: “Thus, if you have a 110-volt hairdryer and you plug it into a 220-volt outlet, then the trouble is that the electrons have as much energy as the hairdryer expects.”

Comment #2 April 15th, 2007 at 9:38 am

“Current is charge per unit time. It’s how many electrons are coming out of a wire every second.”

I am glad the electrons are not coming out of the wires in our home …

Comment #3 April 15th, 2007 at 9:51 am

“Challenge for commenters: can you give me a handwaving argument for why Ohm’s Law should hold? The rule is that your argument has to be grounded in terms of what the actual electrons in a wire are doing.”

I have a lecture about this that I’m giving to my E&M class tomorrow, but it’s too long for this comments section. It does make a decent blog post, though:

Basic Concepts: Ohm’s Law

Do I win anything?

Comment #4 April 15th, 2007 at 10:08 am

Lev R. and Chad: Thanks!

And sure, Chad: you win the right to ask me any question and have me answer it on this blog. (Note, however, that Lev R. won the same prize almost a year ago, and I still haven’t gotten around to answering his question…)

Comment #5 April 15th, 2007 at 10:13 am

Wolfgang: I don’t think the English language supplies the preposition I want! I’d originally written that current is how many electrons are flowing “through” the wire each second — but that’s not right, since it suggests the current should scale with the length of the wire. I just changed it to “flowing through a cross-section of the wire.”

Comment #6 April 15th, 2007 at 10:24 am

I agree that the way Ohm’s law is taught can cause confusion. Another thing that contributed to my confusion: normal circuit theory assumes a stationary flow, without my teacher explicitly saying so. Trying to get a microscopic picture of what happens during various electrical phenomena can be pretty confusing if you don’t know you’re allowed to assume a stationary flow.

Comment #7 April 15th, 2007 at 10:31 am

I was expecting you to speak a little about superconductivity…

Comment #8 April 15th, 2007 at 11:23 am

I won’t give my hand-waving argument, because I didn’t come up with it on my own, but anyone who wants a good brief explanation should check out pages 289–290 of the third edition of Griffiths’ “Introduction to Electrodynamics”, which I’m sure a lot of you have on your shelf.

Comment #9 April 15th, 2007 at 12:55 pm

On one point above, the energy we care about in this sort of discussion is not the kinetic energy. The energy that matters is the potential energy, which is just given by the voltage. A particle with charge q sitting at voltage V has potential energy qV.

Since you mention you had a problem with voltage:

When people talk about a voltage what they really mean is a voltage difference between two ends of a wire, for example the live and neutral in your socket is 220V.

The potential in the wire isn’t actually at 220V at any point. Rather, the potential falls continuously through 220V and electrons move down that potential. As this energy is given up it can be used to run devices, like hairdryers.

Just as particles with mass experience a gravitational potential energy gradient as a gravitational force, so electrically charged particles experience a voltage gradient as an electric force field.

(You will now have a picture of what’s going on in terms of electrons when you understand why the t_ave between collisions in Chad’s post doesn’t depend on drift velocity. Then you can worry about how all this is actually quantum mechanical…)

Comment #10 April 15th, 2007 at 12:57 pm

This test better predicts academic success.

Comment #11 April 15th, 2007 at 1:24 pm

The explanation was about Electrical Engineering. Electronics Engineers are rolling on the floor, laughing.

Charge is also carried by holes, in p-type semiconductors. Charge is also carried by Cooper pairs, in superconductors.

All those nice linear equations go in the trash, and are replaced by models with much more, ummmmm, complexity.

Quantum Hall Effect. Try explaining that with Ohm’s Law. Josephson Junctions. And the charges are going through an insulator when there’s no voltage … why?

At least in the 19th century, EE forced people to be really good at setting up and solving problems that most graduate school textbooks skip over today.

Fermi Surfaces. Virtual particles. Infinite temperatures. Bose-Einstein Condensates.

And you know what? The central scandal in electrodynamics is that Physics has STILL not produced an adequate model of a single electron. Really. Feynman told me so, in tones of anguish. Because he knew that even QED (which requires electrons to be dimensionless structureless points) breaks down at about 10^-15 cm. And the LHC has a chance of revealing structure inside electrons.

Comment #12 April 15th, 2007 at 2:29 pm

Scott says:

“Challenge for commentators: can you give me a handwaving argument for why Ohm’s Law should hold?”

SCOTT, that is IMHO a d*mn tough challenge. The best textbook derivation of Ohm’s law that I know is (IMHO) the one in the Landau and Lifshitz series … of course for pretty much any physical phenomenon, the best explanation is in the L&L series. But most L&L explanations are not hand-waving.

So here is a “Mike and Ike” version. Electrons are mobile charges that are immersed in a decoherent thermal bath. Replace the thermal bath with an indistinguishable observation + linear control process that damps the electron motion. By Choi’s theorem we can always do this; the proof is left as an exercise!

First bonus insight: the gain of the controller determines the temperature of the bath:

(optimal control gain) ↔ (zero temperature)

Informatically speaking, this is a very pleasant and useful way to think about temperature!

Now impose an electric field. The electron will drift at increasing speed until (after a very short time on human scales) the drift force matches the controller force. At subsequent times the drift velocity is proportional to the E-field … this is Ohm’s Law.

QED-BHW (as was to be demonstrated by hand-waving!)

Second bonus insight: the continuous observation process spatially localizes the electron wave function — this is the underlying QIT reason why semiclassical physics works so well.

Speaking more generally, there is IMHO a great pedagogic advantage to be gained by: (1) always teaching quantum measurement before quantum dynamics, and (2) always replacing noisy reservoirs by Choi-equivalent observation and control processes.

Indeed, young quantum information theorists should never learn (or teach) quantum physics/engineering any other way. Right?

Comment #13 April 15th, 2007 at 3:30 pm

“why should resistance obey these identities? Or to put it differently, why should the thing that obeys one or both of these identities be resistance, defined as voltage divided by current?”

There are two questions here. First, why is current proportional to voltage? Chad’s post explains this nicely (JK too), and a handwaving claim involving electrons is that electrons are moving at terminal velocity through the wire: acceleration is proportional to voltage (you can get this from the units: voltage*charge=energy=acceleration*distance), while drag is proportional to velocity, just like it would be for friction. At steady-state, drag=acceleration, so velocity is proportional to voltage. Note that at high voltages, what breaks down is the “drag proportional to velocity” part – or rather, the proportionality constant changes when e.g. the voltage is high enough to ionize air and we get sparks.
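This terminal-velocity picture is easy to check numerically. A toy Euler integration of dv/dt = a - v/tau (all quantities in made-up units; a stands in for the field’s push, v/tau for drag):

```python
# Integrate dv/dt = a - v/tau from rest and read off the terminal velocity.
# Steady state is v = a*tau, i.e. drift velocity proportional to the push.

def terminal_drift(a, tau=1.0, dt=0.001, steps=20_000):
    v = 0.0
    for _ in range(steps):
        v += (a - v / tau) * dt   # Euler step: field push minus linear drag
    return v

v1 = terminal_drift(1.0)
v2 = terminal_drift(2.0)

# Doubling the "field" doubles the terminal drift velocity,
# which is the I-proportional-to-V claim in miniature.
assert abs(v2 / v1 - 2.0) < 1e-9
```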

So now we have V=IR. Why does resistance add in series, but its reciprocal adds in parallel? Well, this follows from looking at what happens to voltage and current in series and parallel. Voltage across a wire really means a voltage difference, and this adds for two wires in series (V=V1+V2) but is equal for wires in parallel (V=V1=V2). [A computer scientist might want to think about shortest path at this point.] For current, think of, um, max-flow. It adds for parallel wires (I=I1+I2) but is equal for wires in series (I=I1=I2). The resistance identities follow directly.

Intuitively, fatter pipes are easier to get through and longer pipes are harder to get through.

On a side note, have you seen the relativistic treatment of E&M? It cuts the number of equations and units down dramatically. On the other hand, good luck trying to understand something like a simple electromagnet with it.

Comment #14 April 15th, 2007 at 5:54 pm

(1) Only a theorist would dismiss the significantly different risk between AC and DC as an irrelevancy. There is a reason experimenters don’t want us in their labs or, in the case of Pauli, even in the same town. Essay assignment for your class: compare and contrast the effects on the human body of 10 mA of 60Hz AC with 10 mA of DC.

(2) I’d unask the question. R = V/I is the definition of resistance. R is basically the linearization (often local, since it depends on temperature, sometimes in interesting ways) of V(I,etc). That is also how you measure it. [cf. SI standards for V and I.] The correct question is then “When is R constant?”, telling us when we can apply all those simple relationships.

Comment #15 April 15th, 2007 at 5:57 pm

[…] April 15th, 2007 Physics for Doofuses: Understanding Electricity First of all, what makes electricity confusing? Well, besides electricity’s evil twin magnetism (which we’ll get to another time), what makes it confusing is that there are six things to keep track of: charge, current, energy, power, voltage, and resistance, which are measured respectively in coulombs, amps, joules, watts, volts, and ohms. […]

Comment #16 April 15th, 2007 at 6:15 pm

The intuition I always have used for resistance is picturing an electron as a runner of some sort trying to get through obstacles. If you like, picture an American football player trying to run through blockers (or whatever sport you like in which blocking is legal). In fact, I believe the physical basis for resistance really is something like electrons getting their movement blocked more effectively by certain materials than others, but I am no physicist, so I may be making that up. Each runner is a 1 volt electron (voltage is energy per electron, so this means at 1 volt, each electron has 1 joule of energy), and each blocker is a 1 Ohm resistor. The number of runners I get to the other side is my current (I’m the coach, I suppose), but I can only try to send a runner through once the previous one has made it. Let’s say it takes 1 second for 1 runner to make it through 1 blocker.

So if I send a runner through a blocker every second, then I achieve a current of 1 Amp. If I increase the voltage (energy of each runner), then because of the extra energy, they can run faster, so they make it to the other side quicker. So tripling the voltage means each runner makes it to the other side in 1/3 the time, so I get 3 times as many runners through in 1 second, tripling my current to 3 Amps.

What if I line up 3 blockers in series? Then the runner is slowed down 2 additional times by the extra 2 blockers, so it takes 3 times as long to get to the other side, and the average number of runners getting through each second (the current) is cut to 1/3 Amps. If I line up 3 blockers in parallel, then I can still only get 1 runner through 1 blocker in 1 second, but I can send 3 runners at once, and each blocker can only block a single runner. Essentially, this makes the playing field wider without making any one path from one side of the field to the other any more difficult, so I can use the extra width to send extra runners. In the same time, I can get 3 times as many runners through; so my current is tripled to 3 Amps.

For a slightly more complex example, what if I have a 1 Ohm resistor in parallel with a 2 Ohm resistor? Then this is simply like having two blockers side-by-side (the 1 Ohm resistor, and 1 half of the 2 Ohm resistor), but one of them has an additional blocker behind him (i.e., the other half of the 2 Ohm resistor). So in 2 seconds I can get 2 runners through the lone blocker (the 1 Ohm resistor), but only 1 runner through the double blockers (the 2 Ohm resistor), meaning I squeeze 3 runners through in 2 seconds, for an average of 1.5 runners/second; i.e., 1.5 Amps, which I expect from Ohm’s law because 1.5 Amps and 1 volt means 2/3 Ohm resistance, which satisfies the parallel resistance equation 1/(2/3) = 1/1 + 1/2. Alternately, a 2 Ohm resistor can be considered a blocker that is twice as good at the job as a 1 Ohm resistor, who can therefore stop a runner for twice as long.
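The arithmetic in this example checks out against Ohm’s law; a few lines of Python to confirm the 2/3 ohm and the 1.5 amps:

```python
# Verify the 1-ohm-next-to-2-ohm runner example with Ohm's law.

def parallel(r1, r2):
    return 1.0 / (1.0 / r1 + 1.0 / r2)

r_eq = parallel(1.0, 2.0)
assert abs(r_eq - 2.0 / 3.0) < 1e-12   # the 2/3 ohm from the story

current = 1.0 / r_eq                   # 1 volt across the pair, I = V/R
assert abs(current - 1.5) < 1e-12      # the 1.5 "runners per second"
```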

As an aside, I believe this is the only mathematical concept that I internalized through a sports analogy.

Comment #17 April 15th, 2007 at 8:43 pm

> Because it turns out that it’s not practical to transmit DC over distances of more than about a mile.

Not quite:

http://www.gov.mb.ca/est/energy/power/generating.html

“Manitoba Hydro’s HVDC transmission system consists of two identical steel tower lines, Bipole 1 and Bipole 2. From Gillam they follow a 900-km route through the Interlake areas to Rosser, located 26 km from Winnipeg on the northwest side. The other main components of the system are three converter stations: Radisson and Henday located in the north, and Dorsey at Rosser in the south.”

http://www.hvdc.ca/

http://www.abb.com/hvdc

Comment #18 April 15th, 2007 at 9:16 pm

CCPhysicist: In your eagerness to ridicule theorists, you seem to have ignored what I actually wrote. I didn’t dismiss the different risks of AC and DC as irrelevant: I said that Edison’s public demonstrations (which involved only AC and not DC) couldn’t have proved anything about that difference, just like a “practical quantum computer demonstration” that doesn’t compare the running time to that of the best classical algorithm.

Also, the point that one can define resistance as V/I, and then frame the question as why that quantity has the properties it has, is one that I made explicitly: “Or to put it differently, why should the thing that obeys one or both of these identities be resistance, defined as voltage divided by current?”

Comment #19 April 15th, 2007 at 11:11 pm

For the CS-minded, there is a nice connection between the properties of random walks in graphs and effective resistance/Ohm’s Law that may help with the intuition. See the monograph by Doyle and Snell and the paper by Chandra, Raghavan, Ruzzo, Tiwari, & Tompa.

Comment #20 April 15th, 2007 at 11:49 pm

“When people talk about a voltage what they really mean is a voltage difference between two ends of a wire, for example the live and neutral in your socket is 220V.”

Are you sure about that, Scott? If I am not mistaken, you need two phases to get 220V; live and neutral will give you only 127V. Something to do with the way electricity is generated (3 phases, 120 degrees apart).

Comment #21 April 16th, 2007 at 12:08 am

Worrying about how many things there are to keep track of is a common beginner’s point of view in many fields of study before general principles are learned. But that’s okay; everyone’s got to start somewhere, I suppose. In the case of electricity, Maxwell’s unification can help make these “things” more transparent. For example, Ohm’s law is just the postulate J = sigma*E, which is a reasonable approximation for most materials. It would take a very strange material to contain charges which move in a direction different than the applied electric field. In words, the “law” postulates that the current density is a scalar multiple of the electric field. If you understand the law this way, you can see why it can’t be fundamental and how it could fail. For example, the relationship could be nonlinear, or even linear via a matrix rather than a scalar. That’s my favorite way of understanding the content of Ohm’s law, and it’s the way I taught it last semester in my sophomore physics class.

Comment #22 April 16th, 2007 at 12:16 am

“Are you sure about that, Scott?”

First of all, I didn’t even say it — a commenter did!

Look, I know my definition of voltage (in terms of the energy imparted to individual electrons) was oversimplified. But talking about “potential differences” would’ve violated my cardinal rule, that everything has to be defined in terms of what the actual electrons are doing.

Regarding your comment about phases, I have no idea.

Comment #23 April 16th, 2007 at 1:36 am

“But talking about ‘potential differences’ would’ve violated my cardinal rule, that everything has to be defined in terms of what the actual electrons are doing.”

A potential difference is how much more energy an electron has from being in one place rather than another.

Voltage:charge :: height:weight (roughly speaking)

So that’s an explanation in terms of electrons.

But to explain where it comes from you really need fields (photons) also.

Comment #24 April 16th, 2007 at 2:30 am

“Let’s say it takes 1 second for 1 runner to make it through 1 blocker.”

A quite nice analogy for picturing scattering cross-sections!

Andrew L, it seems like presenting it that way, at least from the perspective of someone who deals with charges and magnetic fields for a living, has the problem of implying that conductivity is a scalar, when in fact it’s generally a tensor whose components allow all sorts of tricksiness but also allow really interesting stuff, no? Hey, it strikes me though that pedagogically, you could jump from one to the other in a couple of lines, if you can get students to grasp what an identity matrix is. Then the relationship can be shown as

J = sigma*E = sigma*(identity matrix)*E. And then generalize from linearly proportional conductivities to more realistic conductivity tensors, say, such as for Hall or Pedersen currents and associated effects.
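Sketching that jump in code (the numbers in the tensor are illustrative, not any real material’s conductivity):

```python
# Scalar Ohm's law, J = sigma*E, keeps the current parallel to the field;
# a tensor sigma with Hall-like off-diagonal terms steers some of it sideways.

def matvec(m, v):
    # 2x2 matrix times 2-vector
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

E = [1.0, 0.0]                   # field along x

J_scalar = [2.0 * e for e in E]  # sigma = 2: current along x only

sigma_hall = [[2.0, 0.5],        # off-diagonal entries are the
              [-0.5, 2.0]]       # Hall-like sideways response (made up)
J_hall = matvec(sigma_hall, E)

assert J_scalar[1] == 0.0        # no sideways current for scalar sigma
assert J_hall[1] != 0.0          # sideways current appears for tensor sigma
```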

@ Scott, and Chad, and other commenters:

This is a really nice couple of threads, pedagogically speaking. Especially seeing non-plasma people’s handling. Thanks, all.

Comment #25 April 16th, 2007 at 2:34 am

BTW, that wasn’t supposed to sound like a put down or anything. I literally pictured that jump about three seconds before typing it, but there’s not really a way to type an “AHA!” moment into a comment box.

Comment #26 April 16th, 2007 at 2:58 am

“If I am not mistaken, you need to phases to get 220V; live and neutral will give you only 127V. Something to do with the way electricity is generated (3 phases, 120 degrees apart).”

Oops – transatlantic misunderstanding. Here in the UK we have 240V on single phase, so I assumed that the US 220V standard was the equivalent. Turns out it’s not. You’re right that 3 phases is more complex.

As for the potential not being about “what actual electrons are doing” I’m not sure what Scott means. Is a description of parabolic flight more in terms of “what the actual ball is doing” if we refrain from talking about potential energy (and “oversimplify” by taking the energy as .5mv^2!)?

Do you prefer to talk about forces (electric fields in the electron case)?

Comment #27 April 16th, 2007 at 3:03 am

er, North American standard.

see http://en.wikipedia.org/wiki/List_of_countries_with_mains_power_plugs,_voltages_and_frequencies

Comment #28 April 16th, 2007 at 4:02 am

Does Chad Orzel’s explanation imply that 1/R = 1/R1 + 1/R2 for parallel resistors only holds if the conductivity and lengths of R1 and R2 are nearly equal?

Comment #29 April 16th, 2007 at 5:19 am

No, in North America, both 120V supply and 240V supply are single phase. They are provided in a three wire system, with two line voltages, and a neutral. The line to neutral voltage on each line is 120V RMS, but the two lines are 180 degrees out of phase, so the line-line voltage is 240V RMS.

The 3 phase standard is usually referred to as 208V. Each of the three hot wires in this configuration is 120V with respect to ground, but they are 120 deg out of phase, and so, between any two, there is a 208V RMS line-line voltage.
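Both figures fall out of phasor arithmetic: the line-to-line voltage is the magnitude of the difference of two line-to-neutral phasors. A short Python check (120 V legs, phase angles as described above):

```python
import cmath
import math

def phasor(rms, degrees):
    # Represent an RMS voltage at a given phase angle as a complex number.
    return rms * cmath.exp(1j * math.radians(degrees))

# Split phase: two 120 V legs, 180 degrees apart -> 240 V line-to-line.
assert math.isclose(abs(phasor(120, 0) - phasor(120, 180)), 240.0)

# Three phase: two 120 V legs, 120 degrees apart -> 120*sqrt(3) ~ 208 V.
assert math.isclose(abs(phasor(120, 0) - phasor(120, 120)),
                    120 * math.sqrt(3))
```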

Comment #30 April 16th, 2007 at 5:20 am

Ohm’s Law is a paradigmatic example of what Wigner in 1960 famously called “the unreasonable effectiveness of mathematics in the natural sciences.”

Or as we learned to understand it in the 1990s “the algorithmic compressibility of the physical sciences.”

Or as we are learning to understand it in the present decade “the generic classical simulability of physical systems.”

We do understand the world much better now than Wigner’s generation did, don’t we?

And quantum information theory is a major locus of that improved understanding, isn’t it?

Comment #31 April 16th, 2007 at 7:10 am

The fact that freshman courses made voltage hard was exactly why I kept a special interest in physics. An alluring mystery! 😉

Nitpick: Close, but not quite. The international SI units are coulomb, ampere, joule, watt, volt and ohm. (The plural is expressed in the value, not the unit.) I can even find that in many textbooks.

No fair! Linear approximation and charge scattering is already taken. And superconductors are another ball game.

I liked the heat bath model though.

No, it is potential energy (difference) so it is how much energy can be imparted by current. (Since obviously we can have zero current while still having a potential.)

Maybe this is why you say that “the electrons have twice as much energy as the hairdryer expects”. Not going into your specific load and how it behaves when the potential is ramped, but in an idealized resistor load you get twice as many charge carriers (double the current). So here you have four times the energy (U^2/R, i.e. I^2*R here) in the current, not twice the energy per electron.

Well, it didn’t. DC is making a comeback for high power transmission since the transmission losses are lower (less inductive losses) and modern technology can handle voltage conversions efficiently. IIRC in sea cables they have even tried to omit the then unnecessary return line to cut the installation cost.

Even assuming you mean charge carriers (to explain semiconductors and superconductors), I think you will have a hard time explaining displacement (or polarization) currents in those terms consistent with the basic definitions. If you place your current-defining cross-sectional area between the plates of a capacitor, what are your charge carriers doing there? 😉

Not at all. You can continue to cut and replace serial branches until you have parallel resistors of equal length in your model.

But practically, for very low and very high resistances the usual concepts of resistance and/or current break down. Superconductors, and creep currents in insulators, must be modeled separately.

Comment #32 April 16th, 2007 at 7:16 am

“what are they, in terms of actual electrons flowing through a wire?”

That’s a great question to ask, but I think you confused people by not stressing it enough, by not explaining that this is the point of the exercise.

It’s like (the opposite of) how people never explain that Newton’s law F=ma is the definition of force, not a physical law. The point is that the formulas simplify if you recast them in terms of force. I think this leads to the belief that force (and voltage) are more fundamental than the more easily observable quantities, but no one comes out and says that we’re switching priorities. Scott, you said you were switching back, but it just isn’t enough, because people don’t realize how they switched points of view in the first place.

I have more sympathy with the way E&M is taught, since people come to it already having been taught that force and potential are important. If that step had been done well in mechanics, it might be OK that it’s not done in E&M.

Comment #33 April 16th, 2007 at 7:27 am

Hmm. I came up with a problem (for me) in the explanation for parallel resistance. The formula is correct though, and IIRC one can puzzle it out so the simple model (equal lengths) works. But I’m not sure how right now.

Also, wanted to add under that point that the simple current concept breaks down for small currents as well. (There are so many forms of currents. Panofsky & Phillips describe four different basic ones in their “Classical electricity and magnetism”. And there are several more mentioned in the thread. Plasma behavior is most fun, btw.)

Comment #34 April 16th, 2007 at 7:27 am

Maybe you can help this doofus out a little more; I thought electrons moved at the speed of light! Actually, I thought that, like incompressible water in a hose, pushing electrons at one end caused electrons to flow out the other end virtually immediately, so that although individual electrons might move at speeds within normal human comprehension, “electricity” is very fast (it doesn’t take long for a light-bulb to come on “after” you flip the switch).

How fast do electrons move? How do you make them move faster or slower? Is resistance anything like an electronic equivalent to “uphill” or “downhill”? Is there anything that Electricity for Doofuses can tell me about how the speed of light ties into all this, or does that require the evil magnetwin?

Comment #35 April 16th, 2007 at 7:54 am

The actual electron drift velocity is pretty small, fractions of a millimetre per second for typical currents. Getting this from first principles is a nasty condensed matter problem.

Field changes propagate on the order of 0.5 c to 0.9 c, in typical copper wire. You can get this by looking at the wave equation at the interface between copper wire and air and plugging in the right permeabilities and permittivities.
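For a concrete number on the drift speed, here is the standard free-electron estimate (the carrier density and wire size are generic textbook values, not taken from this thread):

```python
# Free-electron estimate: I = n*e*A*v_d, so v_d = I / (n*e*A).

n = 8.5e28      # conduction electrons per cubic metre in copper (approx.)
e = 1.602e-19   # electron charge in coulombs
A = 1e-6        # 1 mm^2 wire cross-section, in square metres
I = 1.0         # a 1 amp current

v_drift = I / (n * e * A)   # metres per second

# The drift is tiny -- well under a millimetre per second -- even though
# field changes (the "signal") propagate at an appreciable fraction of c.
assert v_drift < 1e-3
```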

Comment #36 April 16th, 2007 at 7:54 am

Much wrangling about SI units within our QSE Group—is the proper capitalization “zeptonewton” or “zeptoNewton”?—has been resolved by reference to the NIST’s *Guide for the Use of the International System of Units (SI)*.

The (free) PDF version of this document has a useful place on every scientist’s hard drive, in our opinion. Reading through it is a good way to learn how units work in the real world, both technically and socially.

For example, “volt” is not an SI base unit (Section 4.1): it is a derived unit, with units of “W/A”, meaning watts of power per ampere of current.
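A one-line sanity check of that W/A relation, with illustrative numbers of my own (not from the Guide):

```python
# A hypothetical 60 W bulb on a 120 V supply.
power = 60.0     # watts = joules per second
voltage = 120.0  # volts = watts per ampere = joules per coulomb

current = power / voltage            # amperes drawn
energy_per_charge = power / current  # back to volts, via W/A

print(current)             # 0.5 (amperes)
print(energy_per_charge)   # 120.0, i.e. each coulomb carries 120 joules
```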

And notice that “watt”, “volt”, and “ampere” are *not* capitalized in the above sentence. If anyone—even a journal editor—questions this convention, you may (smugly!) refer them to Section 9.1. We find that even journal editors will (sometimes) bow to the authority of the NIST Guide!

Comment #37 April 16th, 2007 at 8:08 am

Ouch! Now I have to make a third comment in a row, because this is interesting.

If you look at circuits in the simple Kirchhoff picture, you can treat currents and voltages (or resistances and conductances) as duals. Impedance and admittance matrices are actually good tools even in non-linear situations, though admittance matrices are the most useful. (For deep reasons, I think; ask the mathematicians, who seem to use a similar tool.)
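To make the admittance-matrix idea concrete, here is a minimal nodal-analysis sketch in Python; the two-node resistor network and its component values are made up for illustration:

```python
# Kirchhoff nodal analysis: solve G v = i, where G is the admittance
# (conductance) matrix. Hypothetical network: R1 = 100 ohm from node 1 to
# ground, R12 = 200 ohm between nodes 1 and 2, R2 = 300 ohm from node 2 to
# ground; 1 A injected into node 1.
R1, R12, R2 = 100.0, 200.0, 300.0
G = [[1/R1 + 1/R12, -1/R12],
     [-1/R12, 1/R2 + 1/R12]]
i = [1.0, 0.0]   # injected current at each node, amperes

# Solve the 2x2 system by Cramer's rule.
det = G[0][0]*G[1][1] - G[0][1]*G[1][0]
v1 = (i[0]*G[1][1] - i[1]*G[0][1]) / det
v2 = (G[0][0]*i[1] - G[1][0]*i[0]) / det

print(v1, v2)  # ~83.33 V and 50.0 V
# Sanity check: the resistance seen at node 1 is R1 || (R12 + R2) = 100 || 500,
# which is ~83.33 ohms, matching v1 for a 1 A injection.
```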

But when you start to look at active devices, you must pick the right tool. It doesn’t make much sense to look at the function of voltage-controlled devices (FETs, for example) in the same way as current-controlled devices (bipolars). In fact, I’m not sure how to do that.

Actually, in most circuits it is readily apparent that different types of transistors must be treated dissimilarly internally as different passive components are used to set up the various working points from different principles.

But going back to EM basics, aren’t the fields considered to be more basic than the charges in QFTs? At least that was my impression, not being conversant with it. If so, in this case the elegance of the classical model lines up with the fundamentals of the more advanced physics.

Comment #38 April 16th, 2007 at 8:44 am

Torbjörn Larsson says:

*But when you start to look at active devices, you must pick the right tool.*

That is a deep observation. Having just come from a visit to Intel, I can testify that “taping out” a new VLSI processor mask—the ultimate nonlinear active device—is a 400 person-year endeavor.

Intel expects that these highly complex processor designs will work pretty much the first time. They rely utterly on simulation tools to ensure that this happens. The creation and improvement of the required modeling simulation tools is a major technology driver for Intel and all similar companies.

System engineers regard the ongoing (wholly empirical) exponentiation of simulation technology as comprising a “Third Moore’s Law”; the first Moore’s Law being exponentiating improvements in computation hardware (VLSI), and the second being exponentiating increases in informatic databases (Google, Genome Project, Digital Sky Survey).

Obviously, these three Moore’s Laws drive one another, with QIT emerging as providing the foundational tools for all three.

That’s my view anyway … just in case people wonder why an engineer takes such an interest in the foundations of QIT, and why I am so interested to derive even such fundamental equations as Ohm’s Law directly from QIT.

Comment #39 April 16th, 2007 at 10:51 am

*Maybe you can help this doofus out a little more; I thought electrons moved at the speed of light!*

Tip from a fellow doofus: only massless particles can move at the speed of light. Electrons have mass, so they have to go slower.

Comment #40 April 16th, 2007 at 11:48 am

Scott, it’s very commendable that you are interested in improving your grasp of physics. I think a branch of physics that you should learn more about is the Renormalization Group. It has quite a lot in common with complexity theory. I think it would deepen your understanding of complexity theory if you were better acquainted with renormalization group techniques. (I’ve told you this before, I know, but good advice is worth repeating.)

Comment #41 April 16th, 2007 at 12:34 pm

Great thread!

My bad on the first point, but I did think you dismissed the different risks of AC and DC rather too easily. We theorists are dangerous that way.

But my second point was that the question should be *when* is R constant, not *why* is R constant. Unless my student is an EE designing power grids, just about everything interesting concerns when R is not constant (or when V -vs- I is not linear).

I’d also make the comment about long-distance transmission lines stronger than what was said here. Popularity has increased because the conversion can be done more efficiently today (with non-ohmic devices!), but DC interconnects have been used for a long time, for two main reasons. (1) It is not physically possible to phase-lock the generators within a grid if the grid is larger than a fraction of a wavelength. IIRC, Serway makes this point about a line that runs from OR to LA. (2) You can calculate the length that turns a line into a 60 Hz antenna, increasing radiation losses (the point someone made above). This helps explain why Texas is its own grid.
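Point (1) is easy to check numerically; a quick sketch (the propagation speed is idealized as c, and the quarter-wave yardstick is just my illustrative choice):

```python
c = 3.0e8  # m/s; field changes in a real line travel at a sizable fraction of this
f = 60.0   # grid frequency, Hz

wavelength = c / f              # 5.0e6 m, i.e. 5,000 km at 60 Hz
quarter_wave = wavelength / 4   # ~1,250 km: lines on this scale start to radiate

print(wavelength / 1000, "km")  # 5000.0 km
```

A continent-spanning grid is a substantial fraction of one wavelength, which is why phase-locking it is physically hopeless.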

Comment #42 April 16th, 2007 at 6:30 pm

If you define resistance as V/I (and don’t claim it’s constant), I think the series resistance rule is an algebraic consequence, but the parallel rule requires the additional assumption of conservation of energy. Also, you need to know that charge doesn’t accumulate. I’m not sure whether to call that an additional assumption; it’s at least implicit in any discussion of current.

For series resistors, current flowing through the middle is the same as current flowing through each one. The energy acquired by an electron adds as it flows through the two systems. That is, voltage adds across series resistors, so effective resistance also adds.

From conservation of energy (and the usual assumption that you can force an electron anywhere you want), it follows that there is a notion of potential, that the energy gained by an electron going between two points doesn’t depend on the route it took. Thus, parallel resistors have the same voltage. Their currents add and the identity follows.
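The two rules argued for above can be written down directly; a minimal sketch:

```python
def series(*resistances):
    # Common current, voltages add across series resistors: R_eff = sum of R_i.
    return sum(resistances)

def parallel(*resistances):
    # Common voltage, currents add across parallel resistors: 1/R_eff = sum of 1/R_i.
    return 1.0 / sum(1.0 / r for r in resistances)

print(series(100, 200))    # 300
print(parallel(100, 100))  # 50.0
```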

Comment #43 April 17th, 2007 at 1:22 am

[…] April 17th, 2007 an intuitive lesson about charge, current, energy, power, voltage, and resistance.Link Posted by alextorex Filed in Uncategorized […]

Comment #44 April 17th, 2007 at 4:54 am

“First of all, I didn’t even say it — a commenter did! :-)”

Where did I take that out?

Anyway, understanding the phases in terms of what the electrons are doing is quite simple (at least in a simplified way).

There are two really nice figures at http://en.wikipedia.org/wiki/Three-phase_electric_power. The first shows the flux of electrons on the three “pipes” connected to the generator, and even shows the negative acceleration of electrons before they turn (the reason why the change in the current is always delayed wrt the change of the potential).

The second shows the voltage growing/shrinking on each pipe as vectors phased 120 degrees apart.
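A quick numerical check of the 120-degree picture: three sinusoids phased a third of a cycle apart sum to zero at every instant (the amplitude and frequency below are arbitrary):

```python
import math

f = 60.0  # Hz, arbitrary choice
for t in [0.0, 0.001, 0.005, 0.013]:
    # The three phase voltages, offset by 2*pi/3 radians (120 degrees) each.
    phases = [math.sin(2*math.pi*f*t + k * 2*math.pi/3) for k in range(3)]
    total = sum(phases)
    assert abs(total) < 1e-12, total

print("three phases sum to zero at every sampled instant")
```

This cancellation is why a balanced three-phase load needs no return current in the neutral.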

Comment #45 April 17th, 2007 at 7:20 am

Just to articulate three points that many people appreciate implicitly, but which are not often stated explicitly: (1) there are many valid paths to teaching, understanding, and applying technical subjects like electromagnetism and quantum information theory, (2) the path chosen strongly conditions the nature of the community that embraces these subjects, (3) thus, the choice of teaching method is itself an exercise in optimization — what kind of community do we seek to create?

Well … what kind of community *do* we seek to create? IMHO, this question is just as interesting, and just as fundamental, and offers similarly many creative opportunities, as the theorems and equations of mathematics and science themselves.

Approaching pedagogy from any other point of view simply does not make much sense to me. Teaching is far more than the complement of learning, isn’t it?

As a specific recommendation, IAS Professor Jonathan Israel’s encyclopedic histories of the Enlightenment discuss this topic in greater detail and depth, and with enormously greater reference to the history of mathematics, science, and technology, than any other source I know.

Prof. Israel’s texts *Radical Enlightenment* and *Enlightenment Contested*, encompassing the years 1650–1752 (Spinoza … Newton … Leibniz … Hooke!), are particularly recommended. Further volumes are in the works—it is to be hoped that these volumes will eventually extend to the present time.

The opportunities of our generation, I will suggest, are not less than those of previous generations. Which is fun!

Realizing these opportunities comprises also a serious responsibility (as it always has) that for the very first time, in our century, is planetary-scale.

Which is even *more* fun! Because it’s always more fun to play in the big leagues.

Comment #46 April 17th, 2007 at 11:13 pm

rrtucci: I’ve been curious to learn RG recently too, but unsure where to start. Any tips on what to read?

Comment #47 April 18th, 2007 at 4:55 am

Leggett’s recent *Quantum Liquids: Bose Condensation and Cooper Pairing in Condensed-Matter Systems* is IMHO a clear and (reasonably) concise introduction to modern ways of thinking about condensed matter. It nicely complements Zee’s *Quantum Field Theory in a Nutshell*, which is more old-school.

Neither textbook talks much about noise, measurement, and quantum information theory, which (to my mind) means that a *truly* modern textbook has yet to be written! Both books are written with passion for the subject, and are a lot of fun to read.

Comment #48 April 18th, 2007 at 5:44 am

Regarding the “renormalisation group”: one should be very careful with this method. It does work in a small number of cases, e.g. in percolation where there are no correlations, but for the standard application in statistical physics it is actually known to be fundamentally flawed and not to work. This is something that people in statistical physics usually don’t seem to know, or happily ignore when they are told about it.

You can get a good overview of this, and the RG-method itself, in

http://arxiv.org/abs/hep-lat/9210032

After this paper numerous others appeared which demonstrated even worse RG pathologies.

Comment #49 April 18th, 2007 at 7:37 am

I confirm what Klas M. said about the general lack of RG rigor in condensed matter applications. Still it is a cool concept that in practice is reasonably robust in its predictions.

Comment #50 April 18th, 2007 at 7:51 am

John: It is not just a lack of rigour that is the problem here. What the cited paper proves mathematically is that the renormalisation transformations are seriously misbehaved; some of them need only be composed twice in order not to output a Hamiltonian at all.

RG gives correct results for percolation and the 1D Ising model. However, for just about all non-trivial condensed matter applications there is no correct answer to compare with, and one cannot actually perform the RG transformations exactly, so whether the RG predictions are correct or not is a question which is much more open than the physics literature makes one believe.

Comment #51 April 18th, 2007 at 8:25 am

Klas said “…RG…fundamentally flawed and does not work.”

I remind you that Dirac delta functions, Dirac notation, Feynman path integrals, asymptotic series, and a dozen other concepts were, at their inception, vituperated by some mathematicians. But they have all been ultimately redeemed.

The fact is that physicists find RG very useful and intuitive, and they know how to apply it without getting themselves into trouble. (How do they avoid trouble? Whereas mathematicians use rigorous proof as a sanity check for their ideas, physicists use comparison with experiments.)

Klas said “it does work in a small number of cases, e.g. in percolation where there are no correlations.” RG is FUNDAMENTAL to understanding Quantum Electrodynamics and the Standard Model, both of which are extremely well tested. RG is also fundamental in the statistical mechanics of phase transitions. It deals with highly correlated systems near a critical point, so to say that it works only for problems with no correlations is incorrect.

Nathaniel said “I’ve been curious to learn RG recently too, but unsure where to start. Any tips on what to read?” I originally learned RG from high energy physics books. Almost any introductory book on quantum field theory contains a chapter on RG. I also found the book by Nigel Goldenfeld, “Lectures on Phase Transitions and the RG”, very nice.

I’m highly addicted to wikipedia, so I just looked up RG there: sure enough, wikipedia has a nice article about the subject:

http://en.wikipedia.org/wiki/Renormalization_group

Comment #52 April 18th, 2007 at 9:20 am

Klas, I definitely agree. AFAICT, the most rigorous applications of the RG are in gauge field theories, where they are used to “resum” perturbative expansions. In field theory, the requirement that the cut-off of the infinities of the theory *not* be a parameter of the theory’s physical predictions provides exactly the rigorous mathematical structure that the RG needs to work.

It is certainly not clear (to me) whether this beautiful mathematical structure represents a fundamental law of nature, or is simply a confession of ignorance!

Comment #53 April 18th, 2007 at 10:22 am

Ohm’s law should not, and does not, hold in general.

In the limit where the effective collision frequency (where the result of many small collisions causes phase randomization) is dominated by the dependence on the thermal velocities of electrons (most low-voltage, high-density things at room temperature, like conductors, but not more complicated things!), the integral of the collision operator over velocity space scales linearly with the product of the number density of particles and the relative velocity of the electrons versus ions. (This is Ohm’s law.)

Once we have moved from this regime (one example is in driving current in Tokamaks) Ohm’s law becomes, more and more, an approximation only.

To get decent results in, say, metals, people make a free electron gas assumption and then stumble — the real issue is in the validity of the condensed matter description, and it is complicated and not amenable to sweeping hand-wavy arguments.

Comment #54 April 18th, 2007 at 10:50 am

Oh right, I should also clarify.

The integral over momentum space times the effective collision frequency is exactly the rate of change of momentum in the system, since, for every effective collision, we randomize phase in momentum space (and so get zero average momentum).

The derivative of momentum is a force — this is balanced with the negative space derivative of voltage — also a force. This force balance specifies an average velocity, which, finally, gives you your current.
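This force-balance chain is essentially the Drude picture, which can be sketched numerically. The collision time tau below is an assumed ballpark figure, not a derived quantity, so the resulting conductivity is only an order-of-magnitude estimate:

```python
# Drude sketch: force balance e*E = m*v/tau gives an average velocity
# v = e*E*tau/m, so the current density J = n*e*v = (n*e^2*tau/m)*E,
# i.e. a conductivity sigma = n*e^2*tau/m.
n   = 8.5e28     # conduction-electron density of copper, m^-3 (textbook value)
e   = 1.602e-19  # elementary charge, C
m   = 9.109e-31  # electron mass, kg
tau = 2.5e-14    # assumed collision time, s (ballpark for copper at room temperature)

sigma = n * e**2 * tau / m
print(f"sigma ~ {sigma:.2e} S/m")  # comparable to copper's measured ~6e7 S/m
```

Of course, as the previous comment says, actually *deriving* tau is where the hard condensed matter physics hides.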

Comment #56 April 18th, 2007 at 12:59 pm

rrtucci: What I refer to is the RG method as currently used. If you take the time to look at the paper I cite, you will find that many of the RG transformations used in statistical mechanics do not work, and that several of the authors are physicists.

I am also only talking about RG in connection with statistical mechanics type problems; I am not at all an expert in QED and field theories. It is quite possible that RG works fine for field theory but fails in other settings. There are, after all, not many methods which work universally.

The only way to test RG predictions for even simple phase transition models today is to compare with computer simulations. Since we have no rigorous control over the effects of the finite size of such systems, we have no real idea how well they actually describe the behaviour of infinite systems. One example of this is the model called bootstrap percolation. For this model a critical probability had been estimated by simulation on what were considered huge systems, of side 28000, and the critical probability had been estimated with claimed accuracies of several decimals. Then a few years ago

http://arxiv.org/abs/math/0206132

Alexander Holroyd proved that the correct probability is actually about twice as large as the simulation papers had claimed.

Comment #57 April 18th, 2007 at 9:01 pm

[…] People always post interesting links in the comments to Scott Aaronson’s weblog. For example, the other day Paul Beame posted two links that explain the connections between random walks on graphs and electrical networks. One is a complete book on the subject by Doyle and Snell. The other is an article by Chandra, Raghavan, Ruzzo, Smolensky, and Tiwari that further develops the theory. […]

Comment #58 April 19th, 2007 at 4:28 am

Danielle Fong says:

*The real issue is in the validity of the condensed matter description, and it is complicated and not amenable to sweeping hand-wavy arguments.*

Agreed 100%, and just to keep the topic going: what textbooks, in your opinion, do the best job of explaining and justifying the standard models and approximations of condensed matter theory?

Admittedly, such opinions are highly subjective … I was personally impressed by Leggett’s new textbook mentioned above, but would be quite interested in other people’s opinions.

The reason for my interest is that one of the first major career-determining decisions that young people make is their choice of favorite textbooks. IMHO, it is best to “date around” before starting a “serious relationship”!

Comment #59 April 19th, 2007 at 5:17 am

The van Enter paper on the RG deals with a known pathology, that tracing out all but certain block spins can lead to the introduction of long-range interactions between the block spins if the temperature is below the critical temperature. However, every example in the paper deals with systems which are below the critical temperature (there is some stuff dealing with systems in greater than 4d, but that seems again to be a little artificial though I haven’t read it carefully) and the main application of the RG is to systems at criticality. So, while this is interesting, it doesn’t seem to rule out the main application of RG.

The standard argument is that at criticality, if you select out certain block spins, say one spin in each block of 4 spins in a 2d lattice, and trace out the others, then the effective interaction of the block spins is some new Hamiltonian, with an interaction range that depends on the correlation length of the system containing the remaining 3/4 of the spins. However, that correlation length should be finite, since the entire system is at criticality (it is just barely starting to order), and so using only 3/4 of the spins will give something that is above criticality. If you instead start with a system in the low temperature regime, then this can break down and you can introduce long-range interactions for the block spins.

I think it would be very valuable for mathematicians to consider this more carefully, but RG should definitely not be dismissed!
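For what it’s worth, the one decimation that can be carried out exactly by hand is the 1D Ising chain, mentioned above as a case where RG provably works: summing out every other spin maps the nearest-neighbour coupling K to K' = ½ ln cosh 2K, and iterating drives K to the trivial fixed point at 0 (no finite-temperature transition in 1D). A minimal sketch:

```python
import math

def decimate(K):
    # Exact decimation for the 1D Ising chain: summing out every other spin
    # gives a new nearest-neighbour coupling K' = (1/2) * ln cosh(2K).
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 1.0  # any finite starting coupling (in inverse-temperature units)
for step in range(20):
    K = decimate(K)

print(f"coupling after 20 RG steps: {K:.3e}")  # flows toward the K = 0 fixed point
```

Here no truncation is needed at all, which is exactly why the 1D case is safe; the pathologies under discussion arise when the decimated Hamiltonian picks up long-range terms that must be dropped.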

Comment #60 April 19th, 2007 at 6:46 am

Matt: It is correct that the examples in the van Enter et al. paper stay away from the critical point. However, later papers building on this work have given numerous other examples. Some are examples of RG transformations which are pathological in parts of the coupling/field plane where the models are known to be very nicely behaved; e.g., there are such examples for the Potts model in the high-temperature region.

There are also examples of RG-transformations for the 2D-Ising model for which one can rigorously prove that they are pathological in a region which contains the critical point of the model, and part of the high-temperature domain.

One way of viewing all this is that spin models are much more interesting than the RG methods make one think! Rather than being very simplistic models easily described by RG fixed points, there are a lot of interesting things going on.

At this time the RG cannot be trusted as a tool in statistical mechanics, but it has led to the discovery of many other interesting phenomena.

Some people have also thought about making RG work without hamiltonians, working with the Gibbs-measures directly instead, but as far as I know, this has not yet worked out.

Comment #61 April 19th, 2007 at 8:37 am

Can you give an example of such a situation, Klas? My suspicion, without knowing the papers you’re referring to, is that these might be RG transformations that are not ones that I’d normally use. My other suspicion is that if I did accidentally use one of these RG transformations, I’d notice the problem fairly quickly (assuming I paid attention) because the couplings that resulted after performing the transformation did not decay rapidly.

That is, while one can’t blindly ignore possible problems and assume any old RG transformation is fine, I think if one is a little bit careful one won’t run into problems.

Comment #62 April 19th, 2007 at 9:38 am

Matt: I am not up to date on what the most recent results here are but you can find the results I have mentioned, and many further references in these two papers by van Enter

http://citeseer.ist.psu.edu/156619.html

http://citeseer.ist.psu.edu/25189.html

For the Ising model example one uses a block-spin transformation with rectangular-blocks.

Many of the RG transformations for which one can prove pathological behaviour are certainly not “natural” as far as what one would use in physical calculations, but rather ones which are chosen to make the proofs easier.

One of the tricky things here is that even though an RG transformation might be local in its definition it can still give rise to non-local correlations.

Comment #63 April 19th, 2007 at 11:39 am

Thanks for the links, Klas. I only looked briefly at the first paper, but it’s quite interesting. And I would be willing to call the RG transformations they consider “natural”. However, there’s another caveat with their result, as to why it may not apply to the kinds of calculations physicists do, that the authors do discuss. I’m sure you’re familiar with it, but I’ll mention it here.

For simplicity, consider a finite size system. Do a decimation RG transformation: trace out all the spins except for one spin in each block. For a finite size system, this gives some new Hamiltonian for the remaining spins. However, the interactions might be long-ranged. For calculational purposes, we would like to truncate the interactions beyond a certain range (so that the transformation can be iterated), and so we would like the error in the truncation to be small. Thus, it seems like we need short-range interactions.

The example given in the first paper, however, shows that for at least one model (q-state Potts model, q large), there exist configurations of the remaining spins such that the interactions are not short-range even above the transition temperature. This is because if the remaining spins are fixed to certain values, then there is a new phase transition which arises at a *higher* temperature than the old one. However (and here’s the caveat) this is a very special configuration of the remaining spins (probability of it is exponentially small in system size). Thus, probably all that is needed is a weaker notion of short-range interactions: we need that the error involved in dropping the interactions beyond a certain range is small, which occurs if either those interactions are small for all spin configurations, or if they are small for “most” spin configurations.

So, I still don’t buy that this invalidates the RG. I think what is needed is some work by mathematicians on the harder problem of proving that the RG does work in the large number of cases in which it does (I mean, one can do RSRG numerically and get accurate results for exponents, too accurate to just be chance).

Comment #64 April 19th, 2007 at 1:59 pm

Well it is certainly not the case that one has proven that all RG transformations are badly behaved, or that it is impossible to modify them to make them work. But we do have many examples which we know to be badly behaved.

Whether RG gives good numerical results or not is really an open question, since we don’t know the correct answers and rarely have any provably accurate bounds either. Also, one often refers to the scaling assumption when making exponent estimates from numerics, but the main reason for believing in scaling is actually RG, so this is close to circular reasoning.

One often cites error bounds in simulation and numerical papers, but these are usually just statistical bounds, or just guesses. The bootstrap percolation result gives a good example of how wrong one can go here. Another good example is the 2D Ising model on a square grid. In order to give an estimate, accurate to less than 0.01, for the critical exponent alpha of this model (which is known to be 0), one would have to work with systems many orders of magnitude larger than anything which fits inside a modern computer. Thanks to having the exact solution to this model, we can both check the answer and have control over how fast it converges to its limit.

Again, this does not prove that simulations and other numerics are wrong, it just shows that we might not actually know as much in this area as many would like to believe.

Comment #65 April 19th, 2007 at 5:19 pm

hey scott, i think this writeup from an otherwise-reputable site could use some quality aaronson flaming:

http://computer.howstuffworks.com/quantum-computer.htm

(they seem like the type who might even listen to constructive criticism!)

Comment #66 April 19th, 2007 at 8:38 pm

Matt and Klas, thanks for a really fine debate. I’m learning quite a lot, listening to you two. Scott and other complexity theorists: your job, if you decide to accept it, is to use RG and other scaling ideas from physics to study the “phase transition” from P to NP. You might also contribute to making RG more rigorous while you are at it.

Comment #67 April 20th, 2007 at 7:26 pm

rrtucci says:

*Your job, if you decide to accept it, is to use RG and other scaling ideas used in physics to study the “phase transition” from P to NP.*

That’s a fine idea IMHO (seriously). But time-consuming.

Just to keep this fine blog gently stimulated (while Scott is traveling) maybe folks would enjoy a simpler intellectual endeavor. Like Artist or ape?

My score: 83%. Not so good … about 1/3 of people score a perfect 100%.

The link to RG physics? The geometry of the paintings is self-similar!

Comment #68 April 21st, 2007 at 6:50 am

new article on The Donald

http://www.stanfordalumni.org/news/magazine/2006/mayjun/features/knuth.html

Donald Knuth:

“Like a poet has to write poetry, I wake up in the morning and I have to write a computer program”

dedication of first volume of The Art of Computer Programming: “This series of books is affectionately dedicated to the Type 650 computer once installed at Case Institute of Technology in remembrance of many pleasant evenings.”

Comment #69 April 21st, 2007 at 8:30 am

rrtucci quotes Donald Knuth:

*“Like a poet has to write poetry, I wake up in the morning and I have to write a computer program”*

Two more Knuth quotes, from the introduction that Knuth wrote for the book *A=B*. By the way, “A=B” is a pretty good candidate for the best … mathematical … book … title … ever! And, the book is available on-line, for free. Anyway, the Knuth quotes follow:

“Science is what we understand well enough to explain to a computer. Art is everything else we do.”

“Science advances whenever an Art becomes a Science. And the state of the Art advances too, because people always leap into new territory once they have understood more about the old.”

Comment #70 April 21st, 2007 at 6:55 pm

Since Scott is absent this week, I invited a guest lecturer to give a talk for the prestigious “Quantum Computing Since Democritus” series. I asked the speaker to discuss the precedents, if any, of complexity theory in physics. After all, if complexity theory is so damn important, it must have shown up in physics, incognito, long ago. The guest lecturer is an old friend of mine (in my dreams), and he readily acceded to give this talk (well, okay, the Nobel committee asked him first). Without further ado, here is Ken Wilson’s Noble Lecture, awarded to him for his contributions to the understanding of the renormalization group and critical phenomena:

http://nobelprize.org/nobel_prizes/physics/laureates/1982/wilson-lecture.html

Comment #71 April 21st, 2007 at 7:09 pm

“Without further ado, here is Ken Wilson’s Noble Lecture, awarded to him for his contributions to the understanding of the renormalization group and critical phenomena:”

Sorry for my poor English. I meant

Ken Wilson won the 1982 Noble Prize for his contributions to the understanding of the renormalization group and critical phenomena. Without further ado, we present to you his Nobel lecture:

Comment #72 April 22nd, 2007 at 8:28 am

“Noble” should be the eponymous “Nobel” in the above comment.

When I last spent time with Ken Wilson, at an East Coast conference, he had shifted his focus from RG and related topics to the political and structural problems in reforming American education.

Comment #73 April 22nd, 2007 at 3:13 pm

““Noble” should be the eponymous “Nobel” in the above comment.”

That’s why I’ve taken over Scott’s doofus column. I’m clearly more qualified than he is

Comment #74 May 7th, 2007 at 8:19 pm

[…] Scott Aaronson has a basic article on electromagnetism which makes one of the great basic mistakes: if you have a bunch of electrons going through a wire, then the energy scales like the number of electrons times the speed of the electrons squared. […]