Retiring falsifiability? A storm in Russell’s teacup

My good friend Sean Carroll took a lot of flak recently for answering this year’s Edge question, “What scientific idea is ready for retirement?,” with “Falsifiability”, and for using string theory and the multiverse as examples of why science needs to break out of its narrow Popperian cage.  For more, see this blog post of Sean’s, where one commenter after another piles on the beleaguered dude for his abandonment of science and reason themselves.

My take, for whatever it’s worth, is that Sean and his critics are both right.

Sean is right that “falsifiability” is a crude slogan that fails to capture what science really aims at.  As a doofus example, the theory that zebras exist is presumably both “true” and “scientific,” but it’s not “falsifiable”: if zebras didn’t exist, there would be no experiment that proved their nonexistence.  (And that’s to say nothing of empirical claims involving multiple nested quantifiers: e.g., “for every physical device that tries to solve the Traveling Salesman Problem in polynomial time, there exists an input on which the device fails.”)  Less doofusly, a huge fraction of all scientific progress really consists of mathematical or computational derivations from previously-accepted theories—and, as such, has no “falsifiable content” apart from the theories themselves.  So, do workings-out of mathematical consequences count as “science”?  In practice, the Nobel committee says sure they do, but only if the final results of the derivations are “directly” confirmed by experiment.  Far better, it seems to me, to say that science is a search for explanations that do essential and nontrivial work, within the network of abstract ideas whose ultimate purpose is to account for our observations.  (On this particular question, I endorse everything David Deutsch has to say in The Beginning of Infinity, which you should read if you haven’t.)

On the other side, I think Sean’s critics are right that falsifiability shouldn’t be “retired.”  Instead, falsifiability’s portfolio should be expanded, with full-time assistants (like explanatory power) hired to lighten falsifiability’s load.

I also, to be honest, don’t see that modern philosophy of science has advanced much beyond Popper in its understanding of these issues.  Last year, I did something weird and impulsive: I read Karl Popper.  Given all the smack people talk about him these days, I was pleasantly surprised by the amount of nuance, reasonableness, and just general getting-it that I found.  Indeed, I found a lot more of those things in Popper than I found in his latter-day overthrowers Kuhn and Feyerabend.  For Popper (if not for some of his later admirers), falsifiability was not a crude bludgeon.  Rather, it was the centerpiece of a richly-articulated worldview holding that millennia of human philosophical reflection had gotten it backwards: the question isn’t how to arrive at the Truth, but rather how to eliminate error.  Which sounds kind of obvious, until I meet yet another person who rails to me about how empirical positivism can’t provide its own ultimate justification, and should therefore be replaced by the person’s favorite brand of cringe-inducing ugh.

Oh, I also think Sean might have made a tactical error in choosing string theory and the multiverse as his examples for why falsifiability needs to be retired.  For it seems overwhelmingly likely to me that the following two propositions are both true:

1. Falsifiability is too crude of a concept to describe how science works.
2. In the specific cases of string theory and the multiverse, a dearth of novel falsifiable predictions really is a big problem.

As usual, the best bet is to use explanatory power as our criterion—in which case, I’d say string theory emerges as a complex and evolving story.  On one end, there are insights like holography and AdS/CFT, which seem clearly to do explanatory work, and which I’d guess will stand as permanent contributions to human knowledge, even if the whole foundations on which they currently rest get superseded by something else.  On the other end, there’s the idea, championed by a minority of string theorists and widely repeated in the press, that the anthropic principle applied to different patches of multiverse can be invoked as a sort of get-out-of-jail-free card, to rescue a favored theory from earlier hopes of successful empirical predictions that then failed to pan out.  I wouldn’t know how to answer a layperson who asked why that wasn’t exactly the sort of thing Sir Karl was worried about, and for good reason.

Finally, not that Edge asked me, but I’d say the whole notions of “determinism” and “indeterminism” in physics are past ready for retirement.  I can’t think of any work they do, that isn’t better done by predictability and unpredictability.

183 Responses to “Retiring falsifiability? A storm in Russell’s teacup”

  1. Sean Carroll Says:

    I think it speaks to how long I’ve been on the internet that it didn’t seem like a lot of flak. Just another day at the office.

    Certainly, string theory (or approaches to quantum gravity more generally) and the multiverse do pose special problems when it comes to judging their ability to account for our experience of the world. It would be awesome to have a thoughtful and nuanced discussion of those special problems! Instead, we get somber pronouncements about non-falsifiability from fuddy-duddies who think that this puts the matter to rest, which require an occasional slap-down in the form of a pithy response to an EDGE question.

  2. Some dude Says:

    > I also, to be honest, don’t see that modern philosophy
    > of science has advanced much beyond Popper in its
    > understanding of these issues.

    Why do you think you are more qualified to judge this than a philosopher is qualified to judge the current state of complexity theory?

    In particular, why do you think that your two “doofus” examples have not been considered and debated by philosophers? Maybe even considered trivial, as so many laypeople sound when they start talking about complexity theory?

  3. Thomas Shaddox Says:

    “for every physical device that tries to solve the Traveling Salesman Problem in polynomial time, there exists an input on which the device fails.”

    That one’s falsifiable. Just show a physical device that tries to solve the TSP in polynomial time and prove that it succeeds on all inputs. Even if you assume the set of inputs is infinite, provable correctness is conceivable.

  4. Thomas Shaddox Says:

    I don’t think the zebra example is a valid criticism either, because the statement that zebras exist is a direct observation (or, if you will, an interpretation of an observation). It’s not really a hypothesis. If a scientist really wanted to phrase it as an experiment that other scientists can attempt to replicate, he or she probably would naturally construct a hypothesis in such a way that it would be falsifiable, like “If you go to a certain place (perhaps somewhere in Africa, or a zoo), you will observe a horse-like creature with black and white stripes, that matches the description of what people commonly refer to as a ‘zebra.'” If the scientist phrased the hypothesis to be deliberately unfalsifiable, like “Zebras exist,” any other scientist who had never observed a zebra probably would reject this as unscientific.

    The same applies for other claims regarding existence. Before the 19th century, the claim “a large body exists orbiting outside the 8 planets’ orbits” is on its own unfalsifiable and unscientific, but “if you point your telescope at this section of the sky at this time, you will observe a large body” is absolutely falsifiable. The unfalsifiable claim would never have led to actual knowledge via observation, because it’s not even a hypothesis and is thus not actionable.

  5. Or Meir Says:

    Personally, I find that Kuhn has contributed a lot more to my understanding of science than Popper.
    The philosophy of Popper on eliminating errors is very interesting and illuminating, but has little to do with how science is practiced in the real world. Kuhn gives a description of science that I find much closer to reality.

  6. Scott Says:

    Some dude #2:

      Why do you think you are more qualified to judge this than a philosopher is qualified to judge the current state of complexity theory?

    Because when you become a philosopher as opposed to a scientist, you accept as an occupational hazard that outsiders can take positions in philosophical debates just as you can—and indeed, that whatever position those outsiders take, they can doubtless find a faction of professional philosophers who agree with them, because your field is not one that normally reaches consensus. I read some of Popper, and also some of Kuhn and Feyerabend, and the latter two struck me as a very clear step backwards from the former. Popper, at least, is trying to understand how the scientific progress that we can plainly see happening is possible, whereas Kuhn proposes a theory that’s incapable in principle of explaining that progress. And Feyerabend is just throwing bombs and trying to piss people off.

    Science differs by being cumulative: in complexity theory, for example, I don’t have to make up my mind whether I subscribe to the early views of Hartmanis and Edmonds or the later views of Razborov and Arora; there’s no incompatibility between them. (Thus, we might say that ironically, while Kuhn’s and Feyerabend’s ideas fail badly at describing how science evolves, they work much better at describing how the philosophy of science evolves!)

    Incidentally, no, I don’t think Popper (for one) would’ve been fazed in the slightest by my “doofus” examples: his views were a lot more nuanced than that. The doofus-examples were aimed at the Internet invokers of “falsification” as a magic mantra—i.e., the same people Sean Carroll was reacting against.

  7. Raoul Ohio Says:

    Essential background for the “Zebras Exist” issue is the classical “Mathematical Theory of Big Game Hunting”:

    http://math.ucdenver.edu/~wcherowi/mathmajor/archive/catchlion.pdf

    As I recall, I was first made aware of this seminal work by a note in “Advanced Calculus” by Buck.

    This theory has been highly developed over the decades. A summary of TCS aspects can be found in:

    http://komplexify.com/epsilon/category/lion-hunting/

  8. Sid Says:

    One philosopher of science who I think really ‘gets it’ is Peter Galison. Here is a quote from his ‘How Experiments End’ that has a lot of insight:

    “Textbooks do not tell you that groups of physicists gather around the table at CERN stamping OUT and IN on event candidates. This may be due to the persistent myth that, at least at the level of data-taking, no human intervention ought to occur in an experiment or, if it does occur, that any selection criteria should conform to rules fully specified in advance. But here, as everywhere in the scientific process, procedures are neither rule governed nor arbitrary. This false dichotomy between rigidity and anarchy is as inapplicable to the sorting of data as it is to every other problem-solving activity. Is it so surprising that data-taking requires as much judgment as the correct application of laws or the design of apparatus?”

  9. Nex Says:

    Scott: “Finally, not that Edge asked me, but I’d say the whole notions of “determinism” and “indeterminism” in physics are past ready for retirement. I can’t think of any work they do, that isn’t better done by predictability and unpredictability.”

    So how do they differ?

  10. Neil Says:

    “…the question isn’t how to arrive at the Truth, but rather how to eliminate error.”

    That is a spot on description of how knowledge progresses. And falsifiability is a tool for that purpose, along with logical consistency and Occam’s razor. For what it is worth, the multiverse is not particularly useful in that regard, imo.

  11. srp Says:

    Also good, sensible, interesting, and insightful on philosophy of science: Michael Polanyi, Peter Galison (more a historian than a philosopher, but all over these issues in an indirect way), Ian Hacking, Laurence Laudan (more of a philosopher’s philosopher), and Harry Collins (a sociologist by training who does what amounts to anthropological studies of physicists and astronomers). Don’t expect much agreement about, or even direct confrontation of, the same issues among this wildly diverse group, though! They’re just folks I’ve found useful.

  12. Rahul Says:

    When I read the question “What scientific idea is ready for retirement?,” mentioned on another blog (Marginal Revolution), I instinctively knew this had to be yet another cringe-inducing Edge feature.

    They excel at asking deep-sounding, very abstract questions to which academics give verbose, rambling, vague answers. Very philosophical but often low on practical information content.

  13. David Says:

    I find it odd that Kuhn and Feyerabend come to mind as the best critics of Popper. To my mind, better criticisms can be found from the likes of Quine and Lakatos.

    The fundamental finding by Quine, in his work Two Dogmas, is that it is not really possible to confirm or falsify individual statements because of the possibility of compensatory hypotheses. At the crudest, the experimenter may be hallucinating the results. But many other much less crude and more feasible compensatory hypotheses are possible and are often used in practice.

    Lakatos came up with the notion of productive research programmes. His view is that all theories are refutable, and in fact they are often refuted, but not abandoned as the result of simple exemplars. Often data are regarded as anomalous, and sometimes when a good enough theory and a small enough datum disagree, we should believe the theory rather than the datum. Research programmes are characterised by their core commitments, and by a series of auxiliary hypotheses that can be abandoned without abandoning the programme as a whole.

    Lakatos is, I think, also correct that Popper’s view on crucial experiments refuting particular theories is rather an idealisation, in that what matters fundamentally is the productivity of the given research programmes that are in confrontation. It’s fine to blame geocentrists for being silly today, but until Kepler they had a more predictive theory. Adding epicycles is a sign that a research programme may be on the wrong track, but Copernican circular orbits were even less accurate and needed epicycles of their own. So what happens when the two research programmes, geocentrism and heliocentrism, are both in some sense refuted? Heliocentrism needed time to develop, and it eventually did, and produced a more accurate version of itself without giving up its central claim.

  14. Lubos Motl Says:

    I must say that this is a wise take on the issue – with the exception of the populist nonsense about “problems with falsifiability in string theory”. It’s apparently “tactical” to write these lies to please your deluded anti-string readers. I seem to be the only blogger who is working hard to exterminate them.

  15. Scott Says:

    Rahul #12:

      I instinctively knew this had to be yet another cringe-inducing Edge feature.

      They excel at asking deep-sounding, very abstract questions to which academics give verbose, rambling, vague answers. Very philosophical but often low on practical information content.

    For me, the problem with the Edge questions isn’t the lack of “practical information content” (if I want practical information, I go to babycenter.com, not to Edge); rather it’s that, no matter which question is asked, almost all the respondents use it to say the same things they would’ve said in response to any other question. Typically, I read the responses of Richard Dawkins, David Deutsch, Steven Pinker, Rebecca Goldstein, Sean Carroll, and maybe a half-dozen others, and skip the rest.

  16. Scott Says:

    Or #5: OK, I have to bite. What, specifically, has Kuhn contributed to your understanding of science? You say you find his description “much closer to reality”; does that mean you could give examples of Kuhnian principles at work in complexity theory? (Here I don’t count taking anything big that happened, like PCP, and labeling it a “paradigm shift.” The defining, Kuhnian aspects would have to be that the new paradigm was chosen over the old one for non-rational reasons, and also that the old paradigm’s concepts were totally incomprehensible to people trained in the new paradigm.)

  17. Rahul Says:

    Scott:

    “if I want practical information, I go to babycenter.com, not to Edge”

    Yeah, ok my bad. I didn’t mean that kind of practical.

    Everything’s relative. In the context of an Edge question, your blog is virtually a practical handyman’s guide (I mean it as a compliment).

    A lot of Edge replies seem like a group of philosophers splitting hairs over some abstruse semantic issue or an arcane contradiction that matters little. A lot of it seems intensely meta, or argument for the sake of argument.

  18. Life of Brian Says:

    Scott:
    “Popper, at least, is trying to understand how the scientific progress that we can plainly see happening is possible, whereas Kuhn proposes a theory that’s incapable in principle of explaining that progress. And Feyerabend is just throwing bombs and trying to piss people off.”

    Nice summation of how I tend to see these three, especially Feyerabend! I wonder if he actually believed some of the stuff he wrote; he strikes me as someone who enjoyed getting a rise out of people first and foremost. (From what I’ve heard, he was quite the enjoyably cantankerous character!) Plus, he probably wanted to rebel against his former teacher Popper as much as possible so that he could grab some of his own limelight. Who knows?!?

    I also tend to agree that Kuhn and Feyerabend are a step backwards from Popper, precisely for this reason: the latter two seem to deny that scientific progress even exists. Instead it’s all just historical paradigms, and no one can say that one paradigm is better than any other. {I may be simplifying in Kuhn’s case, but I’d put money on this being true to Feyerabend’s stated position}.

    I am glad someone (David) mentioned Lakatos: I found him to be a far more reasonable critic of Popper. Lakatos corrected some of Popper’s naive falsificationism and paid more attention to how science was practiced historically, while still accounting for the progressive nature of scientific knowledge and not falling into some historicist-relativist morass.

  19. Alex Says:

    It seems to me completely obvious that the zebra example does not work.
    “Zebras exist” is not a theory, it is an observable fact. “Dragons exist” is not a theory either.
    “Zebras exist because N million years ago a random mutation happened which gave them stripes” is a theory.

  20. Scott Says:

    srp #11, David #13, and Brian #18: Thanks for mentioning other philosophers! As I said in my MIRI interview a month ago, I think there are contemporary philosophers of science who have useful and interesting things to say—about quantum mechanics, thermodynamics, the arrow of time, cosmology, probability, indexicality, causality, explanation, and all sorts of other topics. And it’s possible as well that there’s been progress on the “demarcation problem.” I haven’t read enough to say—I only feel confident in saying that whatever progress there was, probably wasn’t made by Kuhn, Feyerabend, or anyone who thought similarly to them. 🙂

    As for Lakatos, I haven’t studied him sufficiently either. I did start reading Proofs and Refutations, but I got annoyed by the ease with which those afflicted with aggressive doofosity could distort Lakatos’s arguments to say that “math is just as subjective as literary criticism, since no one can ever agree on the definitions.” The truth, of course, is that you can agree on mathematical definitions, after more or less time (Lakatos, conveniently for him, picked one of the cases where it took more time), and the possibility of such agreement makes math rather unusual among human endeavors.

  21. Scott Says:

    Thomas Shaddox #3, #4 and Alex #19: See, these sorts of verbal acrobatics remind me of Jeopardy!: “phrase your answer in the form of a question, or it doesn’t count!” “phrase your hypothesis in the form of a universally-quantified first-order sentence, or it’s not scientific!” To me, it seems childish to say that (e.g.) the hypothesis of a large additional planet in our solar system, only falls within the scope of science once you specify the actual coordinates to point your telescope at. This sort of semantic game only deepens my appreciation of what it is that Sean Carroll was arguing against.

    Thinking about it like a mathematician, I see no reason whatsoever to privilege statements with a single universal quantifier, over statements with any other combination of quantifiers. (For one thing, I’d like the set of possible scientific hypotheses to be closed under simple logical operations, which they wouldn’t be under the rules you suggest.)

    Thomas’s attempted shoehorning of the hardness of NP-complete problems into single-universal-quantifier format illustrates my point well:

      Just show a physical device that tries to solve the TSP in polynomial time and prove that it succeeds on all inputs. Even if you assume the set of inputs is infinite, provable correctness is conceivable.

    For one thing, we can only prove a theorem about an abstract device, not a physical one. For another, even for abstract devices, I’d say it makes perfect sense to ask whether or not such a device exists, regardless of whether or not its correctness is provable in some particular formal system. We simply get what logicians call a Σ2 sentence (“there exists … for all …”), which is a perfectly respectable type of sentence.

    For me, saying that Σ2 sentences should only be entertained by Science once we recast them in Π1 form, is just as arbitrary as saying that they should only be entertained once we write them without using the letter “e.” And I’m completely with Sean Carroll in rebelling against any arbitrary restrictions of that kind (incidentally, I feel the same way about intuitionism and finitism in math).
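
    To make the quantifier bookkeeping concrete, here’s a toy Python sketch over finite stand-in sets (the devices and inputs below are hypothetical placeholders; the real sets are of course unbounded):

      # Pi_2 claim: "for every device there exists an input on which it
      # fails." Its negation is the Sigma_2 claim: "there exists a device
      # that succeeds on all inputs." Over finite sets we can check both.

      devices = ["D1", "D2"]
      inputs = ["x1", "x2", "x3"]

      def succeeds(device, x):
          # Stand-in for "the device solves instance x in polynomial time."
          return device == "D1" and x != "x2"

      pi2 = all(any(not succeeds(d, x) for x in inputs) for d in devices)
      sigma2 = any(all(succeeds(d, x) for x in inputs) for d in devices)
      assert pi2 == (not sigma2)  # negation just swaps the quantifiers
      print(pi2, sigma2)          # True False

    Over a finite toy universe, both quantifier shapes are equally checkable by machine; the asymmetry that naive falsificationism leans on only appears once the sets become unbounded.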

  22. quen_tin Says:

    “Because when you become a philosopher as opposed to a scientist, you accept as an occupational hazard that outsiders can take positions in philosophical debates just as you can—and indeed, that whatever position those outsiders take, they can doubtless find a faction of professional philosophers who agree with them, because your field is not one that normally reaches consensus”

    This is not true, and it is actually a big problem in how many non-philosophers view philosophy. Most naive views on any philosophical problem have simply been rejected, and there is consensus on that. Only more sophisticated and nuanced versions might survive; coming in as an outsider and simply asserting a naive view, without knowing the whole debate on the issue, is really a big problem, and really annoying for philosophers, just as annoying as an outsider claiming “I refuted Einstein” to a physicist.

    Regarding Popper, there has been much discussion of the epistemic values that drive scientific progress, and there is consensus that falsifiability is not the only one. Popper himself moved to other criteria such as corroboration. There is also simplicity (already highlighted by Popper, who attempted to relate it to falsifiability) and explanatory power, for example. It seems to me that falsifiability as the sole criterion for scientific progress has already long been retired.

  23. Scott Says:

    Nex #9:

      Scott: “Finally, not that Edge asked me, but I’d say the whole notions of “determinism” and “indeterminism” in physics are past ready for retirement. I can’t think of any work they do, that isn’t better done by predictability and unpredictability.”

      So how do they differ?

    The easiest way to see how they differ is to consider claims like the following:

    1. “Bohmian mechanics achieves something amazing and overwhelmingly important: namely, it makes quantum mechanics completely deterministic! Of course, the actual outcomes of actual measurements are just as unpredictable as they are in standard QM, but that’s a mere practical detail.”

    2. “Hey, don’t forget Many-Worlds, which also makes QM completely deterministic! Of course, the ‘determinism’ we’re talking about here is at the level of the whole wavefunction, rather than at the level of the actual outcomes you actually experience in your little neck of the woods.”

    3. “Yes, I agree with you that because of chaos, the No-Cloning Theorem, or whatever else, human decisions might be fundamentally impossible to predict beyond some fixed accuracy, even in principle. But if the laws of physics governing the brain are deterministic, then such ‘practical considerations,’ even if true, couldn’t possibly have any bearing on the philosophical conundrum of free will.”

    I’d like to propose, for your favorable consideration, a blanket principle that rejects every invocation of “determinism” like the three above. Summoning John Sidles, let me give my principle a portentous name:

      Aaronson’s Exorcism of Laplace Demons

      No form of determinism has “fangs,” or is worth wasting any brain cycles on, unless it can be “cashed out” into actual predictions, by some imaginable device consistent with the laws of physics.

    The argument for my exorcism principle is this: no matter what the laws of physics were, we could always recast them so that they looked “deterministic,” by the cheap trick of adding hidden variables until determinism became true!  I.e., we could always define “the state of the universe” to include a complete specification of everything that will happen in the future—in which case, of course the current state would determine the future!  But all we’ve done that way is to shift variables from the “future” column into the “present” column; we haven’t done any actual work to render Nature more predictable. We might as well have simply declared that the future is “determined” by the unknowable mind of God, and stopped there.
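
    Here’s a toy sketch of that cheap trick in Python (a minimal illustration only, with nothing physical about it): a stream of coin flips becomes “deterministic” the moment we stuff a hidden seed into the definition of the state, without a single flip becoming more predictable to anyone who lacks the seed.

      import random

      # "Indeterministic" presentation: the observable state is just the
      # history of flips, and the next flip is not determined by it.
      def next_flip():
          return random.getrandbits(1)

      # "Deterministic" presentation of the very same process: enlarge the
      # state to include a hidden seed, and every flip is now a fixed
      # function of that (hidden) state.
      def flips_from_hidden_state(seed, n):
          rng = random.Random(seed)  # the "hidden variable"
          return [rng.getrandbits(1) for _ in range(n)]

      print(flips_from_hidden_state(seed=42, n=10))

    To an observer without the seed, the two presentations are statistically indistinguishable; only the bookkeeping changed.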

    In summary, determinism strikes me as a totally elastic concept. We can always achieve it by just redefining what we mean by a system’s “state,” and that’s not so far from what’s actually done (e.g.) in Bohmian mechanics or in many free-will discussions. But this very elasticity makes determinism scientifically toothless by itself. So, that’s why I suggest that we drop the concept of “determinism” entirely from these sorts of discussions, and just talk directly about what is and isn’t predictable (or at least, about what is or isn’t determined by what).

  24. Scott Says:

    quen_tin #22: I completely agree with you that philosophy can make progress—see here for a whole interview about that topic. I also accept your points that Popper’s own views became more nuanced over time, that philosophers of science did reach consensus in rejecting naive versions of falsifiability, and that it’s annoying when amateurs (e.g.) defend naive versions of falsifiability in total ignorance of that history. Thank you for that.

    On the other hand, I stand by my position that the sequence Popper→Kuhn→Feyerabend seems like an intellectual devolution: from saying imperfect but sensible and more-or-less right things, to saying importantly wrong things, to saying outrageous things with no concern for whether they’re right or wrong. And when I see intellectual communities that celebrate Kuhn’s and Feyerabend’s “transcending” of Popper’s narrow views, it does little to improve my impression of those communities. Interestingly, while I accept the points in your comment as valid, none of them strike me as inconsistent with this position.

  25. quax Says:

    Scott wrote:

    Less doofusly, a huge fraction of all scientific progress really consists of mathematical or computational derivations from previously-accepted theories—and, as such, has no “falsifiable content” apart from the theories themselves. So, do workings-out of mathematical consequences count as “science”?

    It really surprises me that this is even an issue. I honestly think this may intrinsically stem from a shortcoming of the English language.

    As soon as I was introduced to Mathematics I was told that it is a “Geisteswissenschaft”. Now the latter is usually translated as “Humanities”, which is a pretty poor fit when it comes to math. The German term implies a pure science of the mind (Geist), i.e. non-empirical. That fits Mathematics quite nicely. It probably is the most rigorous science of pure thought.

    On the other hand Wikipedia introduces humanities as “academic disciplines that study human culture, …”

    One has to be a post-modernist to think this captures what Mathematics is about.

    To make the connection back to the main topic of this blog post, there is no doubt string theory was extremely fruitful for Mathematics, and these results stand on their own. But in terms of empirical predictability it has so far failed the most important criterion for being considered good *natural* science. No amount of sophistry can hide this shortcoming.

  26. Scott Says:

    quax #25: Forget about string theory. What about (say) Heisenberg’s uncertainty principle, or Bell inequality violation, or the Casimir effect? Do those count as “natural science,” even though they were purely mathematical derivations from the previously-accepted postulates of QM and QFT?

  27. Gil Kalai Says:

    Hmm, I actually agree with this post.

    I think that we can accommodate scientific theories that cannot be falsified, but this represents a major weakness for a scientific theory.

    Sean writes: “If they prove ultimately too nebulous, or better theories come along, they will be discarded.”

    This replaces “falsifiability” by something more general that we can call “discardability” or “rejectability.” The question is how to develop good scientific tools for testing “too nebulous” or “better theories.”

    I also regard mathematics as science (and here I disagree with Sean).

  28. Sid Says:

    I regard falsifiability as a regime in Bayes’ rule. How?

    Bayes: P(H|E) = P(E|H)*P(H)/P(E). H=Hypothesis. E=Evidence.

    If P(E|H) is very small for a particular E and if you observe E, then your hypothesis has a low probability in the light of this evidence, and has been falsified.

    Of course, P(E) could also be very small, which is the case when ALL relevant hypotheses predict that the evidence is very unlikely. In this case, the evidence cannot be used to favor one hypothesis over another.

    Summed over all possible E, Σ_E P(E|H) = 1. So, for whatever set of E it is that H assigns a non-trivial probability, the observation of ~E will falsify H. So, how can a theory be non-falsifiable? This can only be the case when H is fuzzy enough that it doesn’t allow you to calculate P(E|H). See, for example, “God did it”. Thus, predictiveness is intimately connected to falsifiability.
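
    A minimal numerical sketch of this view in Python (the numbers are made up):

      # Two hypotheses with equal priors. H1 assigns the observed evidence E
      # a tiny likelihood, so a single observation of E all but falsifies it.
      def posterior(prior, likelihood):
          # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
          joint = [p * l for p, l in zip(prior, likelihood)]
          p_e = sum(joint)  # P(E), the normalizing constant
          return [j / p_e for j in joint]

      prior = [0.5, 0.5]        # P(H1), P(H2)
      likelihood = [1e-6, 0.3]  # P(E|H1), P(E|H2)
      print(posterior(prior, likelihood))    # H1 collapses: ~[3.3e-6, 0.9999967]

      # Degenerate regime: every hypothesis calls E very unlikely, so P(E) is
      # tiny and the observation cannot favor one hypothesis over another.
      print(posterior(prior, [1e-6, 1e-6]))  # stays [0.5, 0.5]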

    Clearly, since Bayes’ rule has many more parameter regimes, there is much more to science than falsifiability. Is this too naive a view?

  29. quen_tin Says:

    Scott: I tend to agree, although I think Kuhn brought interesting ideas from a more sociological perspective, which are worth pondering. Lakatos also seems to have made an interesting synthesis between Popper and Kuhn: http://coraifeartaigh.wordpress.com/2011/02/11/kuhn-vs-popper-the-philosophy-of-lakatos/?utm_medium=referral&utm_source=t.co
    I was not talking about you specifically when mentioning the ‘outsider’, I was more making a general point about your answer to Some Dude, and the idea that philosophy is not ‘cumulative’…
    (Maybe it is not in the same sense that science is, but precisely, that point deserves consideration…)

  30. Luke Muehlhauser Says:

    > I also, to be honest, don’t see that modern philosophy of science has advanced much beyond Popper in its understanding of these issues.

    I disagree.

    Scott, have you read Howson and Urbach’s *Scientific Reasoning: The Bayesian Approach*? It’s basically the book-length version of Yudkowsky’s “A Technical Explanation of Technical Explanation”, though the two publications are not aware of each other:
    http://intelligence.org/files/TechnicalExplanation.pdf

  31. Luke Muehlhauser Says:

    (Both sources explain how the Bayesian account of scientific explanation clarifies and moves beyond Popper.)

  32. quax Says:

    Scott #26: I don’t think the predictive power of Heisenberg’s uncertainty principle, Bell inequality violation, or the Casimir effect has ever been in doubt.

    Pure mathematical deduction from first principles is how physics has been rolling at the very least since Lagrange’s time.

    The problem is what to do with physics that doesn’t offer any testable predictions.

  33. rrtucci Says:

    Come on, come on, no more dilly dallying. On to the discussion of Tegmark

  34. Scott Says:

    Luke #30: OK, the growth in sophistication of Bayesian techniques is a good candidate for an advance since Popper that I hadn’t been thinking about. Thanks!

    Does anyone know what Popper’s own views were about Bayesianism?

  35. Life of Brian Says:

    If you’re ever interested in checking out Lakatos again, and want something that you could read in a quick sitting and still get something out of, I’d pick up Matteo Motterlini’s For and Against Method.

    Inside are Lakatos’ “Lectures on Scientific Method” that he gave at the LSE back in 1973. These lectures would have made up the material for a book he was going to write in response to Feyerabend’s Against Method, but he died a year later. The lectures are a whirlwind tour through some important episodes in the history of science and the various problems in the philosophy thereof. He also lays out where he stands vis-à-vis Popper and Feyerabend quite clearly.

    [An interesting aside that relates to Luke Muehlhauser’s recommendation at #30: in his third lecture, when discussing probabilism and verificationism, Lakatos briefly mentions that he had just learned from Colin Howson that there was a way to construct probability measures in science that was immune to Popper’s (and Ritchie’s) criticisms of using probability in science. Alas, however, it was just a brief aside, and Lakatos quickly makes it clear that he wanted nothing to do with probabilism (“It’s still useless!! Just not on the simplistic grounds as believed by Popper and Ritchie!!” he says.)]

    BTW, the rest of Motterlini’s book is mostly correspondence between Lakatos and Feyerabend, which I found often entertaining. Both these guys were pretty interesting characters, and Feyerabend just comes across as a practical joker who reveled in getting a rise out of people, which, again, makes me wonder if he really believed some of his own anarchistic views on science!

    P.S. I recently started reading the Howson/Urbach book that Luke recommended…so far, quite good!

  36. srp Says:

    David Deutsch in The Fabric of Reality explains the Popperian critique of all forms of inductivism, including Bayesianism, by the parable of the chicken and the farmer: the farmer feeds the chicken every day, and the chicken inductively learns that visits by the farmer are a good thing. Then one day the chicken is fat enough, and when the farmer shows up he slaughters it.

    The moral of the story is that all inductive schemes require the inference process to be embedded in a data-generating structure that is both stationary and encompassing of all the possibilities. But we can never be sure that we have encompassed all the possibilities and, as Hume pointed out, we have no independent logical warrant for assuming the future will resemble the past. In Bayesian terms this amounts to the fact that a) events of zero prior probability can occur (this comes up all the time in game theory models with asymmetric information) and b) no description of the prior in a “large world” (to use Savage’s terminology) can be considered to cover the true support of the distribution.
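
    Point (a) is easy to make concrete with a toy Python sketch (the hypotheses and numbers are hypothetical): a Bayesian chicken whose hypothesis space never includes “slaughter” assigns that event prior probability zero, and Bayes’ rule simply breaks on it.

      # The chicken's two hypotheses about the farmer, with prior weights.
      priors = {"farmer likes me": 0.9, "farmer feeds me by accident": 0.1}

      # The probability each hypothesis assigns to the event "slaughtered
      # today": neither hypothesis even contemplates it.
      likelihood = {"farmer likes me": 0.0, "farmer feeds me by accident": 0.0}

      p_event = sum(priors[h] * likelihood[h] for h in priors)
      print(p_event)  # 0.0 -- conditioning on this event would divide by
                      # zero; no updating within this prior can foresee it.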

    This fundamental problem with induction was a big piece of the intellectual problem that Popper was attempting to crack with his falsification program. He did this by denying (with Hume) that induction was logically possible and then adding that it was not necessary because we can pick the least-falsified explanation we know of. Obviously this program has its own problems and later philosophers went to town on them, but it’s a pretty clever hack around a long-standing philosophical problem.

  37. Rahul Says:

    @Lubos Motl #14 says:

    “I must say that this is a wise take on the issue – with the exception of the populist nonsense about “problems with falsifiability in string theory”. It’s apparently “tactical” to write these lies to please your deluded anti-string readers.”

    So, you are saying that string theory does not have falsifiability problems? Do you mean there are pragmatic ways to test it or that even if no pragmatic experimental verification is possible there’s other reasons to embrace string theory?

  38. quax Says:

    srp #36 kudos for this very nice and concise argument. Will file that one for future reference.

  39. Lubos Motl Says:

    Rahul: there are hundreds of rock-solid reasons to be near certain that string theory is right and none of these reasons has anything to do with experiments of the last 40 years. The characteristic scale of string theory – or any other hypothetical unifying theory or theory of quantum gravity – is inaccessible to direct experiments which means that the bulk of pretty much any progress is of mathematical nature. Am I really the first one who tells you about this fact?

  40. Scott Says:

    quen_tin #29: I read the link you provided (and some other summaries of Lakatos’s philosophy), and his theory of the “hard core” of a research program surrounded by an “auxiliary belt” of protective hypotheses that keeps changing, but hopefully for rational reasons, actually seems perfectly plausible to me. Certainly more so than Kuhn’s theory of non-rationally-justifiable paradigm shifts.

    In complexity theory, I suppose the “hard core” includes things like the (Quantum) Extended Church-Turing Thesis and P≠NP, while the “auxiliary belt” includes (e.g.) the thesis that the polynomials that arise in practice tend to be low-degree.

  41. Rahul Says:

    @Lubos Motl #39:

    Thanks! Yes, you are the first string theory evangelist I’ve had the fortune(?) to bump into really. I encounter skeptics more often but maybe that only speaks to the kind of company I keep. 🙂

  42. Scott Says:

    rrtucci #33:

      Come on, come on, no more dilly dallying. On to the discussion of Tegmark

    Well, I haven’t yet read Max’s new book or his consciousness paper, and don’t think I should write a blog post until/unless I do.

    What I can tell you now is that I find Max a fascinating person, a wonderful conference organizer, someone who’s always been extremely nice to me personally, and an absolute master at finding common ground with his intellectual opponents—I’m trying to learn from him, and hope someday to become 10^(-122) as good. I can also say that, like various other commentators (e.g., Peter Woit), I personally find the “Mathematical Universe Hypothesis” to be devoid of content.

  43. Cody Says:

    Excellently said, I could not agree more. (I was looking for a secular version of “amen” which apparently just means “so be it,” so I suppose my first sentence conveyed my sentiment much better than this one — though, maybe not?)

  44. rrtucci Says:

    Hmm, Scott, one of your IAP activities could be to write a book report/blog post about Tegmark’s new book and his consciousness is water paper. I’d give you one credit for it.

  45. Tienzen (Jeh-Tween) Gong Says:

    Falsifiability is a convenient tool for science, and it had a very successful career. But it (Falsifiability) has no epistemological or philosophical basis, as you (Scott Aaronson) have shown vividly with your simple zebra example.

    Scott Aaronson: “… to say that science is a search for explanations that do essential and nontrivial work, within the network of abstract ideas whose ultimate purpose is to account for our observations.”

    I totally agree with your above statement. Yet I would like to go beyond it, with three points.
    1. Instead of explanations, I would like to use the term “contacting and anchoring”. There are many known observations, such as the electron fine-structure constant, α. And the following formula makes contact with that known observation.

    Beta = 1/alpha = 64 (1 + first-order mixing + sum of the higher-order mixings)
                   = 64 (1 + 1/cos A(2) + 0.00065737 + …)
                   = 137.0359…

    where A(2) is the Weinberg angle, A(2) = 28.743 degrees, and

    sum of the higher-order mixings = 2(1/48)[(1/64) + (1/2)(1/64)^2 + … + (1/n)(1/64)^n + …]
                                    = 0.00065737 + …
    2. After a contact is made, it is called an ‘anchor’. Then it should be ‘consistent’ with other anchors: for example, Amir Mulic’s formula (4π^3+π^2+π) also makes contact with α, so the above contact must be consistent with this new formula, and this is indeed the case (see http://snarxivblog.blogspot.com/2014/01/numerology-from-m-theory.html?showComment=1390162093301#c9028461187868033862 ).
    3. The third word is ‘encompassing’: an isolated contacting and anchoring is at best a ‘hint’. A genuine contact must link to ‘all’ other contacts. That is, this α contact must be connected to the particle zoo of the Standard Model, to ‘string unification (such as the G-string)’, to ‘dark matter’, to ‘dark energy’, to the ‘Weinberg angle’, etc.

    With ‘contacting, consistency and encompassing’, we can comfortably retire ‘Falsifiability’.

  46. Peter Woit Says:

    Scott,
    Glad to hear that you also find the “Mathematical Universe Hypothesis” devoid of content. This is an example highly relevant to the “falsifiability” question. Are those willing to argue against falsifiability to prop up the string theory landscape also willing to include the MUH and the Level IV multiverse? Or is going from Level II to Level IV too far and they’d be willing to agree this is not science? Will we hear from them publicly on this? More generally, if the physics community agrees that Tegmark’s proposal is empty pseudo-science, will this news get to the public, or does the fact that he’s an eminently nice and reasonable guy mean no one is willing to say anything?

    As for “falsifiability”, of course the issue is not a simple one, as Popper himself was well aware. Sean Carroll doesn’t tell us who the “fuddy-duddies” making “somber pronouncements” about their naive views on this matter, and needing to be “slapped down,” actually are. I’m curious who he has in mind, since it certainly seems to me that the theoretical physics community is not presently suffering from an unwillingness to consider highly speculative ideas that are at and sometimes past conventional notions of what is testable science.

  47. Peter Woit Says:

    Last sentence should be “at and sometimes past the conventional boundaries of what is testable science”

  48. David Khoo Says:

    Speaking as a current research scientist, science is the search for tools, not truth. Truth is a desirable — but not necessary — property of a good tool. A map drawn at a scale of 1:1 is perfectly useless. A good map must leave things out — it must lie to us. Its cognitive and practical value lies in the correct application of falsehood. The same thing applies to scientific theories.

    Falsifiability does not actually figure strongly in day-to-day research. Most research does not directly address any explicit hypothesis to begin with, let alone concern itself whether such a hypothesis is falsifiable!

    Scientists are more concerned with solving problems. Anything that gives useful insight into an important problem is afforded respect regardless of falsifiability. The final, beautiful, falsifiable theories that are the visible, celebrated products of science typically spent a long time in gestation as half-baked, incomplete, inconsistent, unfalsifiable messes of half-theories, half-chased dead ends and unconnected facts — it is just how science progresses. If scientists insisted on philosophical purity every step of the way, nothing could get done. “String theory” (really a large family of theories and related work) is an example of this. It is unfalsifiable, unknowable and not ready for primetime, but that is not relevant to its scientific value as long as it provides insight and makes progress. When the sausage has been properly made it will be neat and delicious, but it is neither yet and that’s just fine.

  49. John Merryman Says:

    David,
    Sometimes the map is incomplete due to editing, and sometimes it is just as you observe: we don’t know all the territory, and the map reflects our incomplete knowledge.
    Epicycles present an interesting example of how normal, incremental logic can be misleading. The predictable patterns of the cosmos were related to the motions of wheels, and the pattern became the agent. This seems to be the conceptual basis of the mathematical universe, where these designs, the map, are taken to be more foundational than the actual territory. Yet the map is static and the territory is dynamic.
    The question is: how pervasive is this tendency? For example, measurements of distances and durations are highly inter-related, such as whether a measurement is between peaks of waves, or the rate at which they pass a mark. Does this mean the ‘fabric of spacetime’ is physically real, or, like epicycles, is it a pattern arising from a more complex territory?
    We experience time as a sequence of events, and physics distills this to particular measurements of duration, but the underlying reality is not the present moment on a physical timeline, but the changing configuration of the physical, creating and dissolving those events. It is action turning future into past. Tomorrow becomes yesterday, as the earth spins.
    All clocks run at their own rates because they are separate actions. If time were movement from past to future, then the faster clock would move into the future more rapidly, but the opposite is true. It ages/burns/processes quicker and so moves into the past more rapidly. The twin in the faster frame dies before the other returns.
    If we travel from a determined past into a probabilistic future, the math suggests it branches out into multiworlds, but if it is simply action taking its course, probabilities collapse into actualities, as future becomes past.
    This would make time similar to temperature, as an effect of action. Both are elemental effects, but subject to circumstance.
    Time is to temperature what frequency is to amplitude.
    Or we can continue to believe in blocktime.

  50. Vitruvius Says:

    While I have the highest respect for Sean, his Edge response, and From Eternity to Here, my favourite Edge response this year was from Marcelo Gleiser, on Unification, in which he notes (and this may be relevant to the discussion here): “We explain the world the way we think about it. There is no way out of our minds. […] Perfection is too hard a burden to impose on Nature. […] We are successful pattern-seeking rational mammals. That, alone, is cause for celebration. However, let us not confuse our descriptions and models with reality. We may hold perfection in our mind’s eye as a sort of ethereal muse. Meanwhile, Nature is out there, doing its thing. That we manage to catch a glimpse of its inner workings is nothing short of wonderful. And that should be good enough.”

  51. simplicissimus Says:

    “determinism and indeterminism”: for me this was always an argument about whether or not to regard probabilities as a “physical reality”. I myself do not believe that probabilities can (or should) arise from anything other than suitably averaging an underlying deterministic phenomenon, and as far as my distant memories go, Bell’s inequalities prohibit only local hidden variable theories.
    About Bohm, citing Wikipedia (since this is, unfortunately for me, my level of competence): “Bohm said he considered his theory to be unacceptable as a physical theory due to the guiding wave’s existence in an abstract multi-dimensional configuration space, rather than three-dimensional space.[19] His hope was that the theory would lead to new insights and experiments that would lead ultimately to an acceptable one;[19] his aim was not to set out a deterministic, mechanical viewpoint, but rather to show that it was possible to attribute properties to an underlying reality, in contrast to the conventional approach to quantum mechanics.”

  52. John Duffield Says:

    Vitruvius: I read Marcelo Gleiser’s “Retire Unification” essay and didn’t agree with it. IMHO it’s wrong to roll over and give up on unifying say electromagnetism and gravity. Like Sean Carroll’s essay, IMHO it smacks of giving up on science. See http://www.rain.org/~karpeles/einsteindis.html and note this: “It can, however, scarcely be imagined that empty space has conditions or states of two essentially different kinds, and it is natural to suspect that this only appears to be so because the structure of the physical continuum is not completely described by the Riemannian metric”. Einstein was talking about a field as a state of space. I think that’s an important clue myself.

  53. Scott Says:

    simplicissimus #51: Yes, you’re right that Bell’s theorem only prohibits local hidden variables; it does nothing to rule out nonlocal hidden variables like Bohmian ones. Indeed, since Bohmian mechanics is specifically constructed to reproduce all the predictions of QM, there’s a sense in which nothing can rule it out. However, this total immunity from falsification is also a weakness: it means that the “determinism” you get from Bohm is only a toothless, definitional determinism that you could’ve gotten whatever the laws of physics had turned out to be. This sort of “determinism” doesn’t do anything; it can’t be used to make predictions; it has zero explanatory power. So the whole thing I’m advocating is not even to waste our breath arguing about that cheap kind of determinism, and to move on to the real issue, which is the actual predictability of nature.

  54. Max Tegmark Says:

    Thanks Scott for your all too kind words! I very much look forward to hearing what you think about what I actually say in the book once you’ve had a chance to read it! I’m happy to give you a hardcopy (which can double as a door-stop) – just let me know.

  55. Scott Says:

    Max #54: Sure! I’ll trade you a copy of my book for a copy of yours.

  56. Max Tegmark Says:

    Hi Peter Woit: I’m a big fan of falsifiability, and chapter 6 has a long discussion of testability and Karl Popper. Parallel universes are of course not a theory, but a prediction of certain theories, which are in turn scientific if they have other testable predictions. Your critique above seems to center only on the Level IV multiverse, not on the material that makes up the bulk of my book ( http://mathematicaluniverse.org ). I just wrote a long post on your own blog about why I find some of your critique unscientific, and I’m very much looking forward to hearing what you think!

  57. Max Tegmark Says:

    OK Scott – deal! Since I’ve already bought yours, I’ll trade your signing it against a copy of mine.
    🙂

  58. Sniffnoy Says:

    I think it might be helpful to clarify here what we mean by “unification”, because I feel like that word is being used here to conflate two different notions.

    The first of these is just that there is some Theory of Everything, some coherent set of laws of physics that is not an approximation but is simply true, and from which all physical phenomena could in principle be deduced, given sufficient power. (Although, just because such laws exist, does not mean you can ever conclude with certainty that you’ve identified them.) This is basically just a statement of reductionism, in other words; there are not literally different laws applying at different scales, because that wouldn’t make sense. This is, I would say, a pretty weak statement; it’s the sort of thing you expect will be accepted by pretty much everyone who’s not a fuzzhead.

    The second of these is a more specific statement about the form the actual laws of physics will end up taking, namely — and my apologies to the physicists if I am mangling this — that the four fundamental forces will turn out to somehow be just aspects of one simple unified interaction. (I don’t claim to understand what that means; again, not a physicist. But I assume it’s as opposed to a sort of “well you have to sum the influence of each of the forces” situation.) This is a much stronger claim, which as a non-expert I am a little bit suspicious of. I assume the physicists who promote it have some good reason for doing so, but to me it’s just not obvious that anything like this should be true. (Unless it turns out that all they meant by “unification” was a “consistent set of laws that describes all of physics at once including all four fundamental forces”, i.e. basically just reductionism, but that doesn’t appear to be the case.)

    Let me go into more detail here — as I understand it, the general idea here is not that all four forces are actually unified in the situations we normally encounter, but rather just that they are unified at high energy scales. I had the obvious initial reaction to this: “If they’re unified some of the time but not all of the time… doesn’t that mean they’re fundamentally not unified?” But apparently, no, that’s just the wrong way to think about it — rather, there’s one fundamental interaction, but at low energy scales you get spontaneous symmetry breaking, and the symmetry happens to have (randomly, somehow?) broken in a way so as to produce the four forces that we see. And they’re consistent from point to point because they have to be locally consistent for reasons I don’t understand (which I suppose prevents us from accidentally destroying everything by accidentally changing the local apparent laws of physics whenever we do an experiment that reaches electroweak unification energies[0]), but with inflation you can get far-away regions of space where they broke differently, so that the apparent laws of physics are different. (Level II multiverse.) And at the boundaries…? Well, something. I don’t know. I’m not going to worry about it; the point is, this is a picture that actually makes sense. Or at least I assume it does.

    The thing is, while such a theory, if true, is certainly worth studying, if one wants to make predictions from it, one needs to know not only the actual laws of physics, but also how the symmetry broke in our region. All those free parameters you’ve been trying to get rid of come right back! They’re now just in something more like the initial conditions rather than the law of how conditions evolve.

    So I personally am fine with free parameters — you do some experiments and you figure out what they are, and then they’re not so free anymore. Or you do some experiments and you figure out there’s no consistent setting, and you’ve falsified your theory. What’s the big deal? The objections about string theory’s lack of falsifiability, AFAICT, are basically just “it has an enormous amount of free parameters”. (Quite possibly infinitely many, in that it has free parameters which are not individual real numbers but whole 6-dimensional manifolds or what have you.) Is this really a qualitative difference? That doesn’t sound unfalsifiable, it just sounds like it needs lots of calibration before you can falsify it. The only problem is if you declare the job done before you’ve actually determined those values.

    Of course — as mentioned above — you can get rid of those free parameters with a level II multiverse hypothesis. But many physicists find this unsatisfying and to my mind, rightly so, because once again those free parameters come back as soon as you want to actually predict anything. Or (just as without string theory) you could try to “explain” the values of those parameters via anthropics, and, well…

    I mean, this all just seems so unnecessary. This sort of problem only seems to arise if you insist on explanations for the laws of physics. But if the laws of physics are the most fundamental thing there is, the thing that explains everything else, then asking for some other phenomenon to explain them is silly; then that phenomenon would be the most fundamental thing instead. If you instead say “Yeah, maybe the actual laws of physics are kind of complicated and unnatural and fine-tuned, but that’s just the way it is, and that’s the reason the universe looks the way it does; it’s not as simple as possible because the simplest possibility would be nothingness,” then, well, there isn’t a problem.

    Except of course for actually doing physics and figuring out what the hell the laws are, because we still don’t know them.

    …I think I strayed pretty far from the original topic here. The original point was, yeah, I agree with John Duffield that Marcelo Gleiser’s response smacks of fuzzheadedness and giving up on science, because it seems to be against even the weak notion of unification that is basically just reductionism. But one could also just be against the strong form of unification and that wouldn’t be so bad. (Though I assume the many physicists who expect the strong form of unification to hold have a good reason for doing so. Unless of course it turns out that I’ve been misunderstanding this whole time and there is no strong form, it really does just mean reductionism.)

    [0]Although, one physics grad student friend of mine who I asked about this said that he thought that this might be something special about the electroweak interaction in particular? I don’t know. He wasn’t too clear on the matter either.

  59. dc Says:

    Some people may like the “Mathscape” and Kolmogorov complexity papers below:

    http://arxiv.org/abs/hep-th/0011065 – Strings from Logic

    http://arxiv.org/abs/quant-ph/0011122 – Algorithmic Theories of Everything

  60. Rahul Says:

    @Max Tegmark

    I confess I haven’t read your book but can I request you to elaborate more about this comment you made on Peter Woit’s blog:

    “the possibility that they [Multiverse theories] are making inroads because the supporting arguments are actually correct and new supporting evidence has come to light”

    What exactly is the novel supporting evidence? Can you elaborate?

  61. Max Tegmark Says:

    Dear Rahul – here are three examples:

    1) Observations of the cosmic microwave background by the Planck satellite etc. have made some scientists take cosmological inflation more seriously, and inflation in turn generically predicts (according to the work of Vilenkin, Linde and others) a Level I multiverse.

    2) Steven Weinberg’s use of the Level II multiverse to predict dark energy with roughly the correct density before it was observed (an observation later awarded the Nobel Prize) has made some scientists take Level II more seriously.

    3) Experimental demonstration that the collapse-free Schrödinger equation applies to ever larger quantum systems appears to have made some scientists take the Level III multiverse more seriously.

  62. Rahul Says:

    Max: Thank you! Good to know, though sadly, I’m not equipped to judge how strong this new supporting evidence is. Maybe I do need to go & buy your book! 🙂

  63. Arun Says:

    Would you care to comment on your friend’s substitute criteria for falsifiability, namely, that good scientific theories are definite and empirical? Would you agree that neither of his examples meets both criteria?

  64. Arun Says:

    Specifically, string theory is definite but not empirical. Multiverses are empirical but not definite.

  65. Arun Says:

    I really hate to see scientific objectivity discarded for friendship.

  66. Scott Says:

    Arun: I’m not even exactly sure what “empirical” and “definite” mean in this context. Nor am I sure that I’d be willing to call string theory “definite” (given the lack of a clear definition of M-theory, and the amount that’s only well-defined in various limits), or the multiverse “empirical” (I guess it depends which kind of multiverse we’re talking about).

    In any case, I gave my own substitute criterion in the post: namely, explanatory power (with falsifiable predictions being one of the best ways to check whether our attempted explanations are working). You can safely infer that I agree with any other criterion to whatever extent it dovetails with that.

    And I don’t know why you insinuate that I’m “discarding scientific objectivity for friendship”: after all, I didn’t hesitate to air my disagreements with Sean in this very post…

  67. Rahul Says:

    About the zebras example I’ve one question: most non-trivial theories, to my mind, predict at least something that wouldn’t be obvious in their absence, and that new prediction is potentially empirically observable. E.g., in a pre-zebra era (900 AD London?) if someone posited “zebras exist” then one could search for one & verify.

    What’s the analog for string theory? Does it predict something we can measure that we wouldn’t otherwise know in the absence of string theory?

  68. nc Says:

    Can I just point out that Peter Woit himself published a non-falsifiable electroweak theory, see his 1988 paper “Supersymmetric quantum mechanics, spinors and the standard model”, Nuclear Physics, vol. B303, pp. 329-42, or p51 of his http://arxiv.org/abs/hep-th/0206135 where he models the chiral features of the SM’s electroweak charges by picking out a U(2) symmetry as a subset of SO(4). Thus, his objection to “speculation” is only to hype and misrepresentation for publicity and funding, it appears. No harm in speculating, as long as you don’t make misleading claims, and keep a level head.

  69. Rhenium Says:

    An old joke…

    Dean, to the physics department. “Why do I always have to give you guys so much money, for laboratories and expensive equipment and stuff. Why couldn’t you be like the math department – all they need is money for pencils, paper and waste-paper baskets. Or even better, like the philosophy department. All they need are pencils and paper.”

  70. Koray Says:

    Scott,

    When you say “…falsifiable predictions being one of the best ways to check whether our attempted explanations are working…”, does this imply that an explanatory framework need not be necessarily falsifiable? Are there any examples of this?

  71. Anonymous Says:

    Koray: Quantum field theory as an explanatory framework is not any more falsifiable than string theory, as Strassler has spent a number of posts on his blog explaining:

    http://profmattstrassler.com/2013/09/24/quantum-field-theory-string-theory-and-predictions-part-2/

    For a more concrete example, supersymmetry as a general explanatory framework is also not falsifiable, because if we don’t find superpartners at a given energy scale, we can always claim they exist at a much higher scale, out of experimental reach. But if we do find superpartners at some scale, then that would make supersymmetry highly believable.

  72. Arun Says:

    My sincere apologies. I was just feeling that Feynman would weep at what particle physics has become, and at who occupies his desk.

    An observation: that the universe underwent a period of exponential expansion (inflation) helps tie together a lot of observational data; but the causal mechanisms we have so far for inflation lead to eternal inflation and multiverses. This suggests that we don’t understand something. An analogy: our most cherished postulated causal mechanism for the fact that particle masses aren’t all super-high has little experimental support. That we don’t have a good alternate explanation does not mean that the one we have is right; the LHC is well on its way to ruling it out. We have to distinguish between inflation the phenomenon, which is real, and inflation the theoretical mechanism, which leads to absurdities and is likely wrong.

  73. Vitruvius Says:

    In his A New Way to Explain Explanation talk, David Deutsch makes what I consider to be a most astute claim, closely related to the search for explanations Scott mentions in paragraph three of this page, namely:

    “That the truth consists of hard to vary assertions about reality is the most important fact about the physical world.”

    Doesn’t that present a problem for theories that by their nature allow huge numbers of arbitrary variations? To me those seem pretty easy to vary, the opposite of David’s claim.

  74. Arun Says:

    Quantum field theory as a framework was sort of discarded – became unfashionable – for a while, when progress was slow, because people cared about empirical support for ideas. It turned around (I think) after the discovery of asymptotic freedom.

    I think a perusal of the history of science will show that science progressed in each era through pursuing the problems that were tractable in that era – by which I mean tractable in both experimental and theoretical terms. We have forgotten all the rest. The great scientists were those who had the good taste to figure out the right problems to tackle.

  75. Marshall Eubanks Says:

    Anonymous @71 I think that physics (and science in general) frequently approaches falsifiability asymptotically, similar to the way science approaches objectivity asymptotically. (And, of course, verifiability is not the same as falsifiability.) Take supersymmetry as an example. It is clearly verifiable (find some supersymmetric partners). Suppose these are not found. Then of course, “we” (for some definition of we) can “always claim they exist at a much higher scale, out of experimental reach.” Indeed. But what will happen is that the original “we” will die out, funding agencies will become uninterested in funding ever more expensive repeats of the same unsuccessful experiments, and some bright new person will figure out another theory with testable implications (or, maybe, such a theory will be thrust upon them by some experimental evidence), and everyone will move on. It’s not a true falsification, but that distinction might be lost on grad students in 50 years.

  76. Scott Says:

    Koray #70:

      When you say “…falsifiable predictions being one of the best ways to check whether our attempted explanations are working…”, does this imply that an explanatory framework need not be necessarily falsifiable? Are there any examples of this?

    Yes and yes. There are plenty of great explanatory frameworks in science that are purely mathematical; and that have no “falsifiable content” of their own, apart from that of some background theory (like, say, quantum mechanics) that they rest on top of. Now, if a framework has neither falsifiable predictions nor rigorous mathematics behind it, then I’d tend to look at it with extreme skepticism—but even there, I wouldn’t declare a priori that there can’t possibly be valuable explanations there.

  77. Rahul Says:

    To repeat Koray’s question: what are examples of frameworks that are non-trivially explanatory yet not falsifiable?

  78. Scott Says:

    Rahul #77: The entire field of quantum information could be said to make no falsifiable predictions apart from those of quantum mechanics itself. Likewise for numerical relativity and GR, for combinatorial chemistry and basic quantum chemistry, etc. In fact the USUAL situation in science is that we derive mathematical consequences from already-accepted theories in order to explain stuff; it’s RARE that explanation requires the invention of a whole new theory with new falsifiable content. Of course, any experimental falsification of a mathematical consequence of a theory also falsifies the theory itself, so in that sense, one could say that we’re constantly generating new predictions! (E.g., every quantum algorithm is a “prediction” that quantum mechanics is sufficiently true to enable you to implement that algorithm.) But I didn’t think that sort of thing counted.

  79. Rahul Says:

    Scott #78:

    If you succeeded some day in rapidly factoring a large number using a QC, wouldn’t that count as a quantum information explanatory success? Or an accurately & quickly predicted heat of reaction for quantum chemistry (already possible)?

    Prior to the development of either field these non-trivial goals were not possible & hence I call them explanatory; & since it’s entirely possible to get both answers wrong, I call them falsifiable.

    Perhaps I’m not getting the nuance you are hinting at?

  80. Scott Says:

    Rahul #79: The nuance you’re not getting is that, in all the examples I gave, once you accept certain postulates that were accepted prior to the framework in question, you’re mathematically obligated to accept the new consequences that the framework talks about. So for example, it’s not as if Peter Shor said, “I have a new hypothesized description of how Nature behaves, and one way of checking whether my hypothesis is correct is to try to use such-and-such algorithm to factor a 10,000-digit number.” Instead he said: “the ability to factor numbers efficiently is a logical consequence of such-and-such postulates, which physicists had implicitly or explicitly accepted for most of the 20th century.” Thus, while it’s conceivable that Shor’s algorithm won’t physically work, its failure wouldn’t falsify the algorithm per se, but much more basic postulates that were already firmly in place by the time the explanatory framework of quantum information came on the scene.
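    For concreteness, here’s the purely classical skeleton of that reduction (factoring to order-finding), with the order found by brute force; the quantum computer’s only job is to do that one step efficiently. A toy sketch in Python, not anything efficient:

        import math, random

        def order(a, n):
            # Brute-force multiplicative order of a mod n; this is the one
            # step a quantum computer speeds up, via period-finding.
            r, x = 1, a % n
            while x != 1:
                x = (x * a) % n
                r += 1
            return r

        def factor(n):
            # Classical skeleton of Shor's reduction. Assumes n is an odd
            # composite with at least two distinct prime factors (e.g. 15).
            while True:
                a = random.randrange(2, n)
                g = math.gcd(a, n)
                if g > 1:
                    return g  # lucky: a already shares a factor with n
                r = order(a, n)
                if r % 2 == 0:
                    f = math.gcd(pow(a, r // 2, n) - 1, n)
                    if 1 < f < n:
                        return f

        print(factor(15))  # prints 3 or 5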

  81. Rahul Says:

    Scott #80:

    I see. Thanks! That does set the bar very high then.

    Maybe we should restrict the falsifiable & explanatory test to pure hypotheses rather than derivative frameworks. The latter (e.g. Shor’s) are a product of pure logic & an underlying hypothesis (say QM). Any fault would necessarily lie in the logic or the base hypothesis.

  82. Koray Says:

    I’m sorry to be so dense, but I find the concept of explanatory frameworks that don’t predict anything quite puzzling.

    If a framework is by definition unfalsifiable, i.e. it fits the existing data perfectly and no other type of new data can be collected to test predictions, then it is more likely that it captures our limits in interacting with the system under study than that it is an explanation of the system itself. If you’re studying light waves using the human eye, you may develop an equation that models the eye so well that it describes how you get any data you can get, without actually being an explanation for light itself.

  83. danishcrows Says:

    Koray #82: I wouldn’t call quantum information an explanatory framework, more of a consequential one.

    David Deutsch talks about the “reach” of explanations, for example the reach of Newtonian gravity, which predicted the existence of Neptune, versus the Ptolemaic system, which predicted nothing (that I know of, anyway).

    Quantum information pushes the reach of quantum mechanics by predicting the existence of machines that could, for one, factor large integers in polynomial time. If it turns out that’s not possible, then that will be an anomaly like the perihelion of Mercury, and hopefully we get some Einstein to figure out why.

  84. srp Says:

    Shannon’s information theory is a good example of a useful explanatory framework that is not empirically falsifiable. It is a piece of abstract mathematics whose abstractions happen to have an almost one-to-one correspondence to the real-world things it was intended to describe. I would nominate it as a candidate for the greatest piece of armchair theorizing in history.

    Drawing out the unexpected logical consequences (unexpected to boundedly rational humans) of firmly founded definitions, observations, and laws is a form of explanation. In some cases, just partitioning a system and describing “accounting identities” is a powerful form of explanation, although it’s a good idea to know which relationships are causal and which aren’t. For example, any country with a trade deficit in goods and services has an exactly equal surplus in its capital account and the inverse is true as well. Which of those imbalances causes the other in any specific instance is hard to prove, but it is definitely helpful to understand the identity in trying to come up with explanations.
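    To illustrate the first point: the capacity of a binary symmetric channel is the theorem C = 1 − H(p), and computing it is arithmetic, not experiment. A minimal sketch in Python:

        import math

        def binary_entropy(p):
            # H(p) in bits, defined to be 0 at the endpoints
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        def bsc_capacity(p):
            # Shannon capacity of a binary symmetric channel with crossover
            # probability p: C = 1 - H(p). A theorem, not an empirical law.
            return 1.0 - binary_entropy(p)

        print(bsc_capacity(0.11))  # about 0.5 bits per channel use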

  85. Dr. Elliot McGucken Says:

    An element that appears to be largely absent from the kindle version of Max Tegmark’s book, as well as ensuing discussions regarding the book, are the views of the Great Physicists on Physics:

    Einstein wrote, “But before mankind could be ripe for a science which takes in the whole of reality, a second fundamental truth was needed, which only became common property among philosophers with the advent of Kepler and Galileo. Pure logical thinking cannot yield us any knowledge of the empirical world; all knowledge of reality starts from experience and ends in it. Propositions arrived at by purely logical means are completely empty as regards reality. Because Galileo saw this, and particularly because he drummed it into the scientific world, he is the father of modern physics—indeed, of modern science altogether.”

    Max Planck stated, “Let us get down to bedrock facts. The beginning of every act of knowing, and therefore the starting-point of every science, must be our own personal experience.” Planck also stated, “That we do not construct the external world to suit our own ends in the pursuit of science, but that vice versa the external world forces itself upon our recognition with its own elemental power, is a point which ought to be categorically asserted again and again . . . From the fact that in studying the happenings of nature . . . it is clear that we always look for the basic thing behind the dependent thing, for what is absolute behind what is relative, for the reality behind the appearance and for what abides behind what is transitory. . this is characteristic not only of physical science but of all science.”

    Einstein was adamant in his contention, “Truth is what stands the test of experience.” He said this without ever having read Popper.

    Heisenberg would likely not have been a huge fan of the multiverse, writing, “Science. . . is based on personal experience, or on the experience of others, reliably reported. . . Even today we can still learn from Goethe . . . trusting that this reality will then also reflect the essence of things, the ‘one, the good, and the true.” Schrödinger agrees with Heisenberg, stipulating, “The world is given but once. . . The world extended in space and time is but our representation. Experience does not give us the slightest clue of its being anything besides that.”

    Einstein warned us of metaphysics, “Time and again the passion for understanding has led to the illusion that man is able to comprehend the objective world rationally by pure thought without any empirical foundations—in short, by metaphysics.”

    Regarding the infinite complexity of the never-seen strings/infinite multiverses, Einstein wrote, “Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction.”

    Even though many would consider him dated, Sir Francis Bacon wrote, “And all depends on keeping the eye steadily fixed upon the facts of nature and so receiving their images simply as they are. For God forbid that we should give out a dream of our own imagination for a pattern of the world; rather may he graciously grant to us to write an apocalypse or true vision of the footsteps of the Creator imprinted on his creatures.”

    The Great mathematician Poincare understood the difference between math and reality, “Geometry is not true, it is advantageous.” –Poincare

    In Disturbing the Universe, Freeman Dyson writes, “Dick [Feynman] fought back against my skepticism, arguing that Einstein had failed because he stopped thinking in concrete physical images and became a manipulator of equations. I had to admit that was true. The great discoveries of Einstein’s earlier years were all based on direct physical intuition. Einstein’s later unified theories failed because they were only sets of equations without physical meaning. Dick’s sum-over-histories theory was in the spirit of the young Einstein, not of the old Einstein. It was solidly rooted in physical reality.” In The Trouble With Physics, Lee Smolin writes that Bohr was not a Feynman “shut up and calculate” physicist, and from the above Dyson quote, it appears that Feynman wasn’t either. Lee writes, “Mara Beller, a historian who has studied his [Bohr’s] work in detail, points out that there was not a single calculation in his research notebooks, which were all verbal arguments and pictures.”

    In his book Einstein, Banesh Hoffman exalts physical reality over mere math, with the great Michael Faraday as his example:

    “Meanwhile, however, the English experimenter Michael Faraday was making outstanding experimental discoveries in electricity and magnetism. Being largely self-taught and lacking mathematical facility, he could not interpret his results in the manner of Ampere. And this was fortunate, since it led to a revolution in science. . . most physicists adept at mathematics thought his concepts mathematically naïve.”

    Finally, Einstein reminds us, “The most beautiful thing we can experience is the mysterious. It is the source of all true art and all science. He to whom this emotion is a stranger, who can no longer pause to wonder and stand rapt in awe, is as good as dead: his eyes are closed.”

    And,

    “The important thing is not to stop questioning.”

    Well, the glaring question is, “Why are the greats being ignored?”

    Einstein wrote, “When the solution is simple, God is answering.” Should we not be seeking simplicity?

    Einstein reminds us that physics must begin and end in physical reality: “Before I enter upon a critique of mechanics as a foundation of physics, something of a broadly general nature will first have to be said concerning the points of view according to which it is possible to criticize physical theories at all. The first point of view is obvious: The theory must not contradict empirical facts. . . The second point of view is not concerned with the relation to the material of observation but with the premises of the theory itself, with what may briefly but vaguely be characterized as the “naturalness” or “logical simplicity” of the premises (of the basic concepts and of the relations between these which are taken as a basis). This point of view, an exact formulation of which meets with great difficulties, has played an important role in the selection and evaluation of theories since time immemorial.”

    Please let us start welcoming Einstein et al. in our books and blogs. Thanks!

  86. Raoul Ohio Says:

    Dr. Elliot McGucken:

    Excellent points. Your question about whether Feynman was a “shut up and calculate” guy makes me wonder about my own idea of what SUAC means.

    I have never looked it up. I had assumed something like: You can calculate a lot of things with QM. Go ahead and calculate, and don’t worry about esoteric stuff such as what the wave function really is. Keep in mind that the calculations tend to be highly difficult and often require insightful simplifications, advanced math, powerful computers, years of work, and what not.

    With this picture, I assume Feynman is a SUAC guy. Either way, he liked to play drums.

  87. Elliot Temple Says:

    Feynman was not a “shut up and calculate” type. Actually he was a Popperian, which is something of the opposite.

    I think some people are confused partly because of some negative comments by Feynman about philosophers. But Popper said similar things too! Most philosophers are really bad. Knowing that is one of the steps to being a good philosopher.

    Sources on Feynman being a Popperian: his in-person discussion with David Deutsch. Plus his books are a giveaway if you know what to look for. Plus Wheeler would have introduced Feynman to Popper if Feynman didn’t already know of him (and being familiar with Popper, plus having a very Popper-compatible worldview, is what a Popperian *is*).

  88. Rahul Says:

    Whatever I’ve read of the “negative comments by Feynman about philosophers” seems entirely justified. His critique was spot on for 99% of the profession, or even for the sort of stuff that gets published as Philosophy.

  89. Olav Says:

    Elliott Temple (#87):

    The average level in all the disciplines I have some familiarity with is not that high, in the sense that most practitioners in the field don’t make groundbreaking contributions. The same goes for philosophy. However, I think it’s an exaggeration to say that most philosophers are “really bad.” Most philosophers I know are pretty careful and accurate thinkers, at least.

    Rahul (#88)

    Are you sure you are in a position to make pronouncements about 99% of philosophy? Philosophy is a big and diverse field. I study philosophy, but I don’t have anything like an overview of the field. I don’t even have an overview of my own area of specialization, which is philosophy of science.

  90. Fred Says:

    Nex #9, Scott #23:

    “3. “Yes, I agree with you that because of chaos, the No-Cloning Theorem, or whatever else, human decisions might be fundamentally impossible to predict beyond some fixed accuracy, even in principle. ”

    I was just reading what Feynman had to say about this (http://www.feynmanlectures.info/docroot/III_02.html#Ch2-S6)

    “Of course we must emphasize that classical physics is also indeterminate, in a sense. It is usually thought that this indeterminacy, that we cannot predict the future, is an important quantum-mechanical thing, and this is said to explain the behavior of the mind, feelings of free will, etc. But if the world were classical—if the laws of mechanics were classical—it is not quite obvious that the mind would not feel more or less the same.”

    “It is therefore not fair to say that from the apparent freedom and indeterminacy of the human mind, we should have realized that classical “deterministic” physics could not ever hope to understand it, and to welcome quantum mechanics as a release from a “completely mechanistic” universe. For already in classical mechanics there was indeterminability from a practical point of view.”

    ( I believe Scott covers all this in details in http://arxiv.org/abs/1306.0159 )

  91. Rahul Says:

    Olav #89:

    “Are you sure you are in a position to make pronouncements about 99% of philosophy? Philosophy is a big and diverse field. I study philosophy, but I don’t have anything like an overview of the field.”

    Depends on whether you believe in sample surveys. 🙂 Do I have to visit every zoo & forest in the world to know that 99% of tigers aren’t white tigers?

    In any case, I could be a horrid judge, or my reading choices may suck, but when someone of Feynman’s intellect opines on the matter it only strengthens my prior prejudices.

  92. Jay Says:

    Scott #80:

    Couldn’t we say that Shor made a refutable prediction, namely the prediction that one should become convinced that factoring is in BQP after scrutinizing a certain piece of text which constitutes his proof?

  93. Timothy Gowers Says:

    I completely agree with your view that there is something importantly right about falsifiability but that the idea needs to be expanded. An expansion that I think works well is this: an idea is not very useful if you cannot explain under what circumstances you would abandon it. So for example, I abandon the statement “unicorns exist” not because it has been falsified but because there are very good reasons to suppose that if unicorns existed then I would know about it, and there are historical reasons for the unicorn concept to have come into existence despite there not being any actual examples.

    This approach doesn’t work so well for mathematical statements, but even there one can say, “I would not have accepted this theorem if I had been made aware of a counterexample.” I believe that this makes sense even if the theorem has a proof. There may be no possible world where the theorem is false, but there is an “imaginable if you don’t explore every single logical consequence of everything” world where it is false, and that’s roughly what that hypothetical statement could be said to be referring to.

    I haven’t read through all the comments above, so apologies if I’m simply repeating what someone else has said.

  94. Rahul Says:

    “…an idea is not very useful if you cannot explain under what circumstances you would abandon it.”

    I’d love to hear from String Theory fans the answer to this question.

  95. Fred Says:

    Scott #42
    ““Mathematical Universe Hypothesis” to be devoid of content.”

    I would agree, but I’ve been having some thought experiment about this (sorry for the long rambling).

    “Hypothesis” 1)
    The irreducible essence of reality is our own consciousness, i.e. “I think therefore I am”. The fundamental building block of our reality is the “I”.
    What we label as the “physical” aspect of reality (as opposed to “mathematical”) and that we take for granted as the building block of reality is really a construction of the mind (we do not have direct access to the true nature of elementary particles, fields, etc).

    Imagine I ask you to close your eyes, plug your ears, and think for 10 seconds about someone very dear that you have lost (generating some self-contained strong emotion).
    Let’s say we somehow measure the initial state of your brain at the start of the experiment, to some reasonably good approximation, good enough that we could simulate all the neurons in your brain and their evolution for the next 10 seconds with good precision (we can then measure the state of your brain after the 10 seconds and compare it with the simulation to confirm that the final states are arbitrarily similar).
    That simulation is run on a powerful computer – it doesn’t have to be done “real-time” since there is no input/output (the brain was taken as a closed system for the duration of the experiment).

    “Hypothesis” 2)
    This simulation of your brain for 10 seconds is in itself totally equivalent to your consciousness for that amount of time.
    It might not be exactly the same quantitatively (because we can’t predict things for long before chaos takes over and the simulation eventually diverges), but it’s qualitatively the same. I.e. it’s a valid implementation of an irreducible building block of 1). It “feels” just the same way you “felt”.
    This hypothesis can’t be verified (ok, so it’s not science), just like no-one could ever prove to you that they’re conscious in the same way you are (which in a way would tend to validate the fact that it’s a fundamental building block).
    But, assuming the hypothesis is true, at what moment is this alternative consciousness “realized”? Is the question even meaningful? (but we can ask the same question about our own mind in our “wet” brain)
    Is it realized during each iteration of the computation cycle (with a delta time granular enough so that enough states are computed to make it evolve similarly to the real brain, say, one micro-second)?
    Is it realized when the state is somewhat updated in the RAM? When the CPU computes the next state?
    What if the “computation” was done slowly by hand, following rules (themselves possibly part of the state), and writing down each incremental state on sheets of papers?
    When is the consciousness realized? When we write down each new state on paper?
    Since the brain is finite, it has a finite state with a finite number of possible values, so in theory we could map each state to a very long integer (therefore it’s not really an algorithm/computation that has to work on an infinite domain; it can be done as a lookup). Each iteration is simply picking the next integer, and the whole process is equivalent to extracting a subset of the integers. When is the consciousness realized in this case? Each time you “pick” an integer?
    Is the act of picking an integer even necessary to realize the consciousness?
    Do we need to do anything at all?
    Isn’t every possible “realized” consciousness (you, me, the simulation) and its associated “world view” then nothing more and nothing less than a pure mathematical pattern, i.e. a subset of the natural numbers, that always was and always will be?
    This seems to suggest that the possibility of simulating consciousness would be an indication that the essence of reality is mathematical and that physical objects are derived on top of it.
    (okay, I stop here, sorry again for the rambling)

  96. Scott Says:

    Timothy #93: Thanks for your comment, which was very well put! I think my criterion is a special case of yours: if you say that every idea needs to “pull its explanatory weight” in accounting for our observations, then in particular, you’re committing to abandoning your favorite idea once someone can reproduce the same explanatory successes (or lack of successes…) in a simpler way that bypasses the idea.

  97. Scott Says:

    Jay #92:

      Couldn’t we say that Shor made a refutable prediction, namely the prediction that one should get convinced that factoring is in BQP after scrutinizing at a certain piece of text which constitutes his proof?

    We could say it, but (it seems to me) only by stretching the meaning of “refutable prediction” far beyond where it’s useful. We might as well say that anyone who proposes an experiment and predicts its outcome is really “proving a theorem”: namely, the theorem that such-and-such would be the outcome of the experiment if Nature were described by such-and-such mathematical model.

  98. Jay Says:

    Well, it’s no different from saying that the concept of falsifiability needs to be expanded to mathematical activity.

    For the converse idea, uniqueness would be a big problem.

    http://www.smbc-comics.com/comics/20140104.png

  99. Jay Says:

    Scott #5

    As an example of Kuhnian misunderstanding, maybe your discussion with RJ Lipton about whether factoring was proven to be in BQP would count. I don’t understand RJ Lipton’s point well enough to be sure, but I had a strong Kuhnian feeling reading that post.

  100. Olav Says:

    Rahul #91

    I believe in sample surveys if they’re unbiased. But philosophy, like other disciplines, has a bunch of sub-communities working in relative isolation. Any sampling you do is likely to be biased towards sub-communities whose work you’ve heard about through friends or acquaintances. For instance, even though I specialize in philosophy of science in one of the main departments for philosophy of science in the US, I’ve never read either Feyerabend or Kuhn.

    And with due respect, unless you’ve invested serious time and effort into philosophy, I don’t really think you are qualified to judge the field even if your sampling of it is fair. The same goes for Feynman, of whom I’m a big fan. Academics have a bad tendency to denigrate other academic fields whose goals and methods they haven’t actually taken the time to try to understand and appreciate (philosophers do this a lot too).

    Of course, it’s possible to invest a lot of time and effort into understanding a field and then at the end conclude that it’s worthless. But if you are in that position, I think the more reasonable and humble thing to say is (in most cases), “that field is not for me,” not “99% of that field is crap.”

  101. Scott Says:

    Jay #99: No, I don’t think that was a Kuhnian misunderstanding, just a regular, ordinary misunderstanding (on his part)! 😉 Indeed, Dick seemed to agree in this followup post.

  102. Jay Says:

    Thanks, but this followup is precisely why I thought this story was very Kuhnian. 🙂

    Look what happened: at the end you’re still in disagreement. On one side, you still think he made [just a regular, ordinary misunderstanding]; on his part, he still thinks an extended proof would be required [I do agree the comments together have answered the issue, but I would humbly suggest that someone write down a short sketch of the proof so all can see it.].

    So as a Kuhnian one can predict that you will never understand the real reasons for Lipton’s post, nor will Lipton understand why you don’t see the problem he sees.

    OK, this is not that useful, and maybe just wrong after all. 🙂

  103. Scott Says:

    Jay #102: But look what happened. On the ground-level issue, Dick Lipton did not remain in perpetual disagreement with the quantum computing community. His paradigm turned out not to be incommensurable with our paradigm. Instead, like the honest scientist he is, Dick changed his mind after a few days of discussion: “well gosh, I guess factoring is in BQP after all! I guess the experts who kept insisting it was in BQP weren’t total doodyheads! sure, most textbook expositions might skip a few standard steps, but those steps can be spelled out, just as the experts said…” 😉

  104. Arun Says:

    I dunno about Shannon’s information theory not being falsifiable; the limit on communication channel capacity is akin to the second law of thermodynamics.

  105. Jay Says:

    In his original post Lipton listed three possible situations:

    1. The proof is correct and we are just confused.
    2. We are right, but. (…) all quantum experts are aware of how to fix it. (…)
    3. (…) There really is a missing argument here.

    Of course what really matters is to exclude 3. But you take his followup blog post as saying he thought the situation was 1; I take it as saying he thought the situation was 2.

    The difference might seem unimportant, and in a sense it doesn’t matter. But if you want to understand why others don’t automatically accept some proofs (especially the proofs that seem obvious to your eyes), then all Kuhn says is that there’s frequently more to it than “they are just confused”. Yes, if you wish, but what makes these kinds of bright minds confused? Is it possible there are situations where I am confused myself for the same reason?

  106. Jay Says:

    Olav #100

    Out of random curiosity, what authors are you required to read in your program?

  107. Olav Says:

    Jay #106

    There are no required authors, really. But if you have an interest in the philosophy of science, I’d say it’s hard to get through this program without being exposed to at least the following people at some point: van Fraassen, Reichenbach, Carnap, Elliott Sober, Judea Pearl, Hilary Putnam… That’s just off the top of my head.

  108. Jay Says:

    Thx Olav.

  109. srp Says:

    Arun #104: What test would you come up with to disprove Shannon’s results? It’s math – it follows from the definitions of information, etc. If you thought you’d falsified it in some example, it would mean you’d made an accounting mistake somewhere.

    I hesitate to talk about thermodynamics, but my impression is that if you observed heat flowing in the wrong direction that would tend to falsify it. (There is a sense in which the 1st law seems more like a definition than does the 2nd, in that if you saw an apparent repeatable violation of the 1st you could define some new source or sink of energy to put the books back in balance. Vacuum energy or poltergeist energy or whatever.)

  110. Scott Says:

    Jay #105: I’d say there’s only a difference of degree, not of kind, between your situations 1 and 2. It’s extremely common that a proof that’s spelled out in enough detail for one mathematical community is confusing for a different community, and vice versa—but these are questions of good or bad pedagogical practice, not of truth or falsehood. In the case of factoring being in BQP, there was nothing that the experts were confused about. It was really as if a mathematician had written x²−4=0, therefore x is +2 or −2, and Dick and Ken were complaining that the mathematician didn’t first add 4 to both sides.

    Now, am I ever on the other side of this sort of situation? Am I ever the one who’s confused about something that’s obvious to quantum field theorists or set theorists or representation theorists? Constantly! Like, every friggin’ day!

    So, what do I do when that happens? I consult a textbook or Wikipedia, I email friends who are experts, I post on MathOverflow or Physics StackExchange. What I try never to do is to convince myself, or suggest to others, that I might have discovered some gap that’s eluded all the experts for decades—because I’ve learned from experience that that’s essentially never the case. At the least, I don’t think it’s a possibility that one should even contemplate before completing the steps above.

  111. Rahul Says:

    Olav #100:

    I’m still not convinced. With titles(*) like “Personal knowledge: Towards a post-critical philosophy”, “Philosophy in the flesh: The embodied mind and its challenge to western thought”, “Psychosemantics: The problem of meaning in the philosophy of mind” I tend to go with the Feynman school of thought.

    *Top hits on a google scholar search on post 1950 papers with “Philosophy” as keyword.

  112. Olav Says:

    Rahul #111

    Well, you probably know you shouldn’t judge books by their titles. Nonetheless, of the three titles you mention, only the third one is by an academic philosopher. “Philosophy” is a pretty common word, and the fact that a book title contains that word is not an indication that it’s a philosophical work written by an academic philosopher — on the contrary, I would say.

    In the same way, if you go to the philosophy or metaphysics section of your bookstore, most of the books you find will typically be on religion or new age topics, perhaps with a couple of Dan Dennett books thrown in for good measure.

  113. Rahul Says:

    Olav #112:

    As an aside I was perusing the Most Read Articles for Dec 2013 in the British Jnl. for the Philosophy of Sci. (not sure how good or bad it is!).

    (1) Ducks, Rabbits, and Normal Science: Recasting the Kuhn’s-eye View of Popper’s Demarcation of Science

    (2) KUHN AND THE COPERNICAN REVOLUTION

    (3) Kuhn’s Changing Concept of Incommensurability

    (4) KUHN’S SECOND THOUGHTS

    For all your saying “I’ve never read either Feyerabend or Kuhn,” the field (or at least its readers) seems pretty preoccupied with Kuhn, eh?

    A followup question: Why are so many Philosophy Papers of the form “What X thought about Y’s theories”?

  114. Rahul Says:

    Sorry, I can’t stop picking on Philosophy! 🙂 Here’s some more great titles:

    (1) “The Role of Explanation in Understanding”

    (2) “Multiple Realizability, Identity Theory, and the Gradual Reorganization Principle”

    (3)”Gestalt-Switching and the Evolutionary Transitions”

    Some of those sound like something Scigen could have generated!

  115. Jay Says:

    Scott, I’m still not convinced you got my point (Kuhnian confusion best applies to the things you think you know: say, you think you know what a mathematical demonstration is, and then you encounter a demonstration you don’t recognize as one), but anyway, thanks for the discussion.

  116. Scott Says:

    Rahul #114: It’s far from obvious to me that any of those articles are nonsense (they might be, but I can’t tell from the titles alone). And actually, the relationships among the three concepts “causation,” “explanation,” and “understanding” are something I’ve wondered about myself (though I have no idea whether that article would help me).

    More generally, if you get to dismiss something because the title sounds funny, then you’d probably have to throw away most of science as well (classic demonstration: the snarXiv). Which, come to think of it, you might be more than happy to do…

  117. Rahul Says:

    Scott #116:

    Oh, I’m not saying they are nonsense either (though they could be like you say). I just think they are funny titles.

    I’d never heard of SnarXiv; it’s awesome! Thanks!

    PS. You do seem to think of me as some crusader against Basic Sciences generically, eh? 🙂 I’m not.

    PS2. You don’t think there’s anything amiss with modern philosophy? I think you do (based on your recent interview). Maybe you can articulate what’s amiss better than just my juvenile snark.

  118. Olav Says:

    Rahul,

    BJPS is a great journal. If you want to learn something about what philosophy of science is up to, it’s a good resource (but I urge you to actually read the articles, and not just look at their titles).

    “For all your saying, “I’ve never read either Feyerabend or Kuhn.” the field (or at least it’s readers) itself seems pretty preoccupied with Kuhn, eh?”

    No, I can say pretty confidently that philosophy of science is not preoccupied with Kuhn. The fact that you have the impression that Kuhn is a big figure in philosophy of science tells me that you don’t really know what the field is about. Kuhn is a bigger figure among historians and sociologists of science, and that could explain why the articles you mention have a lot of readers. It’s important to realize that BJPS has a lot of readers who are not in philosophy at all.

    If you want a fair impression of what philosophers of science are currently interested in, it would be a much better idea to look at the current issue of BJPS. Or to be even more up to date, you could look at advance access. If you do, these are the titles you find:

    Nina Emery
    Chance, Possibility, and Explanation

    Bradford Skow
    Are There Genuine Physical Explanations of Mathematical Phenomena?

    James Woodward
    Simplicity in the Best Systems Account of Laws of Nature

    Luke Fenton-Glynn and Thomas Kroedel
    Relativity, Quantum Entanglement, Counterfactuals, and Causation

    Bert Leuridan
    The Structure of Scientific Theories, Explanation, and Unification. A Causal–Structural Account

    Foad Dizadji-Bahmani
    The Probability Problem in Everettian Quantum Mechanics Persists

    Leah Henderson
    Bayesianism and Inference to the Best Explanation

    Michael Esfeld, Mario Hubert, Dustin Lazarovici, and Detlef Dürr
    The Ontology of Bohmian Mechanics

    Stephan Torre
    Restricted Diachronic Composition and Special Relativity

    Richard A. Healey
    How Quantum Theory Helps Us Explain

    Arif Ahmed
    Causal Decision Theory and the Fixity of the Past

    Kenny Easwaran
    Why Physics Uses Second Derivatives

    Aidan Lyon
    Why are Normal Distributions Normal?

    OK, I’ll stop there. Of course, you shouldn’t make any judgment about quality based on titles. But notice that not a single one of these articles has to do with Kuhn, nor are any of them of the form, “What X thought about Y’s theories.” The fact of the matter is that you are basing your opinions of philosophy of science on mistaken prejudices.

  119. Rahul Says:

    Olav #118:

    Thanks for the links!

  120. Douglas Knight Says:

    Two things to note about the most-read articles about Kuhn that Rahul found. (1) they date from 1996, 1970, 1993, 1971; (2) all are by philosophers. (When I look at it, a 2012 Kuhn anniversary article has jumped to the front of the list. And the third most popular article isn’t even about Kuhn.)

  121. Rahul Says:

    @Douglas Knight

    To clarify, I didn’t go looking for articles about Kuhn. I just went to the British Jnl. for the Philosophy of Sci. website and browsed their latest list of Most Read Articles (Dec. 2013)

    The number of articles about Kuhn, some quite old too, tells me that someone (I don’t know who) is indeed doing a lot of Kuhn reading. Knowing this is a top-notch academic journal, I assumed this was professional philosophers. But maybe not.

    In any case, it was surprising that a monthly (not all time) list of most read articles had so many decades old ones on the list. Is that common?

  122. Raoul Ohio Says:

    Probably relevant to the “Blackhole Firewall” dustup recently considered in SO is Hawking’s new view (Event horizon problem? What event horizon?):

    http://www.escapistmagazine.com/news/view/131663-Stephen-Hawking-Claims-There-Are-No-Black-Holes

  123. Mayo Says:

    Glad that the author has taken the daring step of actually reading Popper (unlike many who comment on the Carroll recommendation). Pleased also to see mention of my paper, “Ducks, Rabbits, and Normal Science: Recasting the Kuhn’s-eye View of Popper’s Demarcation of Science”.

    So may I mention my Error and the Growth of Experimental Knowledge (1996) (winner of the Lakatos Prize), which combines a nuanced view of Popper, Peirce, Kuhn, Lakatos… and the error-statistical philosophy of statistics? errorstatistics.com

  124. Rahul Says:

    @Mayo #123:

    My rare chance at asking a real philosopher 🙂 : What’s with the profession’s obsession with writing papers of the generic form: “Interpreting what X thought about the theory of Y.”

  125. Or Meir Says:

    Hi Scott,
    Sorry for the late reply. Can I give examples of Kuhnian principles at work in complexity theory? I believe I can.

    Let’s start by recalling Kuhn’s description of science and paradigm shifts: according to Kuhn, science works in two phases, “normal science” and “paradigm shift”. In the normal-science phase, scientists are focused on solving problems, where the current paradigm determines which problems are interesting and what solutions are acceptable.
    When a paradigm shift occurs, it’s not because of a rational reason, but because scientists feel that the current paradigm is “stuck”, that is, they cannot use it to solve problems any more. Then they switch to another paradigm that allows them to solve new problems.

    Now, can we give examples of such principles? Here are some examples that come to my mind:

    1. Why did we move from studying automata, grammars, etc., to studying complexity theory? Is it due to a rational reason that can be elaborated? I would say that this transition is an example of a paradigm shift.

    2. Why do more people study hardness-of-approximation and AGT than circuit lower bounds? Is it because the former are rationally more justified, or just because those areas have more “approachable” problems?

    3. What made the definition of “polynomial time” become an acceptable measure of efficiency? Was it a rational decision, or merely the fact that people felt it would be easier to solve problems this way?

    Similar questions can be asked about other common definitions: can Nash equilibrium be rationally justified as a valid “solution concept” in AGT? I doubt that there is any “good” reason to think that people will play a Nash equilibrium in real life. But Nash equilibrium is a simple and elegant concept that people can write papers about, regardless of its relevance to reality.

    4. In general, what determines what questions and areas are popular and considered “interesting”? Can those decisions be justified rationally, or does it have to do with what areas seem approachable and allow publishing papers?

    That said, mathematics and TCS are not a good example for any discussion about philosophy of science, because they don’t do experiments, and technically they do not study the “real world” but imaginary worlds constructed from axioms. Looking at sciences that try to describe the real world, such as physics and biology, is more illuminating.

    Ask yourself, for example, why do biologists and physicists have somewhat different standards of what constitutes a legitimate experiment. Is this difference rationally justified, or is it just a social norm?

    Going a bit further away from “hardcore” science to economics: can the use of mathematical models of markets be justified rationally? Many important economists, such as Keynes, did not think so, but today it is the common norm. You can say that here, too, we have an example for an irrational “paradigm shift”.

  126. Or Meir Says:

    I forgot what is maybe the best example for Kuhnian principles in TCS:

    What made people excited about NP-completeness in the early 70s? Probably not its inherent interest, because people did not care much about Cook’s original paper.

    What made the subject popular was the subsequent paper of Karp, which showed how the notion of NP-completeness could be used to start an industry of papers proving NP-completeness of problems.

    That is, people started studying NP-completeness because they saw it allowed them to publish papers. This is exactly how Kuhn describes the way scientists move to a new paradigm: they do it in order to solve more problems.

  127. Question? Says:

    “When a paradigm shift occurs, it’s not because of a rational reason . . . . they switch to another paradigm that allows them to solve new problems.”

    OK, what’s not rational about that?

  128. Scott Says:

    Or: What you say about the history of complexity theory sounds reasonable, EXCEPT for your gratuitous use of the epithet “irrational.” What could be more rational than switching to a new paradigm because the old one no longer lets you productively solve problems? That’s not a matter of arbitrary whim, like postmodernist theories or skirt lengths: it’s more like upgrading to a new computer because the old one no longer does what you want.

    (Sorry, just saw that I crossed comments with someone else making the same point…)

  129. Douglas Knight Says:

    It seems to me that Or’s first comment is not very clear, but the second answers the objections of Scott and Question? by explaining that “solve a problem” really means “write a paper”, and that what constitutes a “problem” and a “solution” for the purpose of publishing a paper is socially defined, and it is not at all clear that those definitions are rational.

  130. Question? Says:

    Douglas #129

    Do you then subscribe to the view that what most scientists are primarily concerned with is merely the act of publishing papers, and that they are not primarily motivated to answer the questions or solve the problems actually addressed in the papers? Fair enough, but that’s not proof of Kuhn’s thesis. Rather it’s just assuming the conclusion.

  131. Raoul Ohio Says:

    I do not know exactly what Kuhn means by “paradigm shift”, but I will make a couple of remarks about the points Or listed:

    1. Automata, grammars, etc., vs. complexity theory:

    These both fall in the category of theoretical background for CS. There was NO shift; both are still mainstream. My guess is that most CS people regard “automata, grammars, etc.” as painfully boring, but important if you want to really know what is going on at a low level.

    2. More people study “hardness-of-approximation (HOA) and AGT vs. circuit lower bounds”: you study what is interesting, fashionable, and something you can make some headway in. HOA is the practical end of complexity theory.

    3. “Polynomial time” (PT) as an acceptable measure of efficiency:

    PT is totally NOT a practical measure of anything, but it is the stone-cold obvious theoretical boundary for algorithms. The situation is somewhat similar to the notion of a “C-infinity” function in advanced calculus. While it is difficult to say whether any real-world function is (or even could be) really C-infinity, it is an obvious idealization. In both cases, it provides a simple criterion for proving theorems.

    5. “NP-completeness in the early 70s?”:

    Whenever a powerful approach is (discovered, developed, recognized, …), everyone tries it out on other problems to see how far it can be pushed. Sometimes it is a flash in the pan, sometimes it becomes a standard tool.

    Do any of these rate as a paradigm shift? You decide.

    While I don’t see any paradigm shifts in CS that “changed everything”, I can think of plenty of things that made a major addition to the way people thought before:

    For programming: “stored program concept”, functional programming, object oriented programming, recursive function calls.

    Algorithms and Data Structures: Randomized A+DS, FFT, dynamic programming, linear programming.

    Re Douglas Knight’s remark: there are many problems with the relation between research and writing a paper. But it is the system we have, and it kind of works. The “Letters” section of Physics Today has debated this issue in depth for decades.

  132. Douglas Knight Says:

    Question?, Or was elaborating on his earlier “Kuhn has contributed a lot more to my understanding of science than Popper.”

  133. Or Meir Says:

    Hi,
    A few clarifications and additions:

    1. It’s been more than ten years since I read Kuhn, so everything I say about his philosophy should be taken with a grain of salt.

    2. Regarding the use of the word “rational”: indeed, after I wrote my comments I realized that the words “rational” and “irrational” are not really appropriate, and are also not the way Kuhn describes those shifts.
    If I recall correctly, what Kuhn says is that the reasons for shifting from one paradigm to another *are not justified by scientific means* – that is, are not justified by the scientific method itself (say, by experiments).
    The paradigm shift might be justifiable, however, by common sense or by arguments about beauty. Nevertheless those are “non-scientific” reasons, since they themselves cannot be supported by science.

    3. The notion of “paradigm” is a bit vague, as Kuhn himself admitted, but one of Kuhn’s definitions can be described as follows: a paradigm is a social norm that determines:

    a. which questions are considered interesting;

    b. what counts as a legitimate solution to a question.

    4. Another example that comes to mind of a paradigm shift in mathematics is the shift to the rigorous formalism we use today, which was not commonly used before the 19th century. This is an example of a paradigm shift that changed the social norm of “what counts as a legitimate solution?”.
    Note that there is no way to justify “scientifically” the claim that such rigor is inherently better; and indeed, the physicists do not seem convinced.

    5. All in all, I don’t think that Kuhn’s philosophy is right about everything, or that it is a complete description of science. All I said is that it contributed a lot more to my understanding of how science works than Popper’s philosophy (which I also appreciate a lot).
    In particular, Kuhn’s philosophy has drawn my attention to the extent to which science is a social enterprise, and as such is influenced heavily by social norms. In particular, what scientists choose to study, and what they consider a legitimate argument, is determined by social norms more than by an “objective” and “rational” method.

  134. Olav Says:

    Or Meir #126 and Raoul Ohio #131,

    Your discussion of polynomial time as a reasonable measure of efficiency (or as the theoretical boundary for usable algorithms) brings to mind a few questions I’ve been pondering for a while. I apologize if these questions are confused novice questions, but I haven’t really seen them addressed in textbooks.

    (1) Isn’t the reasonableness of PT a great example of an empirically justified thesis? That is, most practical algorithms that are polynomial-time are (it seems to me) bounded by polynomials with low constant terms and low exponents. And it’s the fact that practical PT algorithms are “reasonable” that makes PT a reasonable theoretical measure too (or a reasonable bound).

    Suppose, on the contrary, that it had turned out that all or a majority of practical PT algorithms (division, linear programming, etc.) were asymptotically bounded by polynomials with huge exponents or constants. Would PT still have been a reasonable theoretical measure of efficiency?

    Or suppose it turned out that there was an algorithm solving 3SAT that was strictly speaking not PT, but that was asymptotically bounded by p+e, where p is a polynomial and e is an exponential term so tiny that it would never matter for any conceivable practical purpose. Would PT still be a reasonable theoretical boundary?

    (2) As I said earlier, most practical algorithms in P seem to have small constants and exponents, and for some reason it seems far-fetched to think that 3SAT should be solvable by an algorithm of the sort that I described in the previous paragraph (i.e. an algorithm that was roughly polynomial for all practical purposes, but non-polynomial in the limit). Is it just luck that the complexity world is well-behaved in this way? Or is there some deeper theoretical explanation for why the distinction between P and NP also works as a practical boundary?
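
    To see how extreme the hypothetical above is, here is a minimal Python sketch with made-up constants: a running time of n^3 plus the exponential term 1e-50 * 1.0001^n, searching (in logs, to avoid overflow) for the first input size at which the exponential part overtakes the cubic part:

        from math import log

        def log_poly(n):
            return 3 * log(n)                    # natural log of n^3

        def log_expo(n):
            return log(1e-50) + n * log(1.0001)  # natural log of 1e-50 * 1.0001^n

        n = 2
        while log_expo(n) < log_poly(n):
            n *= 2
        print(n)  # about 2 million: below that, the algorithm looks purely cubic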

  135. Olav Says:

    “Or is there some deeper theoretical explanation for why the distinction between P and NP also works as a practical boundary?”

    Sorry, I mean PT and NPT.

  136. Or Meir Says:

    Olav, the distinction between P and NP does not work as a practical boundary. The main reason is that those are worst-case definitions, while people in the real world care about average-case running time. For example, we strongly believe that satisfiability is not in P, but in practice, satisfiability is solved all the time, even on inputs that have tens of thousands of variables.

    As for why most polynomial-time algorithms have a small exponent, it could also be explained by us not being smart enough to come up with algorithms that exploit the “ability” to run in polynomial time with a large exponent.
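
    To illustrate the first point, here is a toy DPLL-style SAT solver in Python: worst-case exponential, yet on many “typical” instances unit propagation and simple branching finish almost instantly. (A minimal sketch for illustration; real solvers such as MiniSat add clause learning, watched literals, restarts, and much more.)

        # Toy DPLL SAT solver.  Clauses are lists of nonzero ints, DIMACS-style:
        # the literal 3 means "x3 is true", -3 means "x3 is false".

        def dpll(clauses, assignment=None):
            if assignment is None:
                assignment = {}
            # Unit propagation: keep simplifying while forced assignments appear.
            changed = True
            while changed:
                changed = False
                simplified = []
                for clause in clauses:
                    # Skip clauses already satisfied by the current assignment.
                    if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                        continue
                    rest = [l for l in clause if abs(l) not in assignment]
                    if not rest:
                        return None          # conflict: clause falsified
                    if len(rest) == 1:       # unit clause forces a value
                        assignment[abs(rest[0])] = rest[0] > 0
                        changed = True
                    simplified.append(rest)
                clauses = simplified
            if not clauses:
                return assignment            # every clause satisfied
            # Branch on the first unassigned variable and recurse.
            var = abs(clauses[0][0])
            for value in (True, False):
                trial = dict(assignment)
                trial[var] = value
                result = dpll(clauses, trial)
                if result is not None:
                    return result
            return None

        # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
        print(dpll([[1, -2], [2, 3], [-1, -3]]))  # prints a satisfying assignment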

  137. Scott Says:

    Or #133: I don’t even know what “the scientific method” means! So, if someone told me that (to take an example discussed above) the shift of most of the TCS community from formal grammars over to algorithms and complexity theory wasn’t justified by “the scientific method,” but “merely” by researchers’ desire to move from a field that had mostly been exhausted over to one that’s full of exciting open problems, then I’d shrug my shoulders and say that that sounds like a perfectly good justification to me.

    The point where I get livid is the point where people want to use philosophy of science, not to understand science, but only to “unmask the pretensions” of those so-called scientists who use so-called rationality to make their so-called progress, but now we know so much better, and know that they’re no more making “progress” than medieval scholastics arguing over angels on the head of a pin.

    I think one of the main reasons Kuhn remains so popular is that he wrote in a way that seems to support the beliefs of the postmodernist pretension-unmaskers—and then he refused to disavow the craziest interpretations of what he said (for example, he refused to say that paradigm shifts in the hard sciences have typically happened for very rational reasons).

    Some subset of philosophers has been at this game, of “unmasking the pretensions of science,” for hundreds of years—but the amazing thing to me is that the entire time, science has just continued to make progress, change the world, solve what were previously recognized as open problems, generate new problems that hadn’t even occurred to anyone before, etc. No other human endeavor has ever made this kind of consistent progress, and indeed that progress has changed philosophy in a way that philosophy itself rarely changes philosophy. So how about unmasking the pretensions of the pretension-unmaskers? 🙂

  138. Jay Says:

    Scott #137

    > I think (…) Kuhn (…) wrote in a way that seems to support the beliefs of the postmodernist pretension-unmaskers

    I’m confused. Are you saying Kuhn himself was intentionally trying to support some cranks, or are you saying some cranks happened to misinterpret his writings?

    If it’s the first, would you mind citing a sentence or two of his actual writings to which you think this criticism applies?

    If it’s the second… duh… by this standard there’s little we wouldn’t dismiss at first sight! Think of your own writing about free will… Would you think it fair if someone dismissed what you wrote, not because of what you wrote, but because some crank could misinterpret it?

  139. Question? Says:

    “Are you saying Kuhn himself was intentionally trying to support some cranks, or are you saying some cranks happened to misinterpret his writings?”

    Why are those the choices? Perhaps he wasn’t at all trying to support some cranks and they in turn interpreted his writing relatively correctly?

  140. Jay Says:

    #139
    …because those are the two ways I can understand his delicate choice of words. Maybe something else was meant, but I doubt “wrote in a way that seems to” is consistent with your last interpretation.

  141. Question? Says:

    Jay,

    I disagree. In fact, I think it’s likely that Kuhn wrote as he did because he actually believed what he was saying. That doesn’t stop what he wrote from being sloppy and wrong, and from being understood by cranks and post-modernists largely in the way it was intended.

  142. Raoul Ohio Says:

    BTW, (IMHO) the key theoretical aspect of P is that it is closed under sums, products, and composition. That makes everything work.
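
    To make that concrete with a minimal worked case: if algorithm A runs in time n^2, then its output has size at most n^2, so feeding that output into an n^3-time algorithm B costs at most (n^2)^3 = n^6 steps. The composition is still polynomial, which is exactly what lets reductions be chained freely.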

    My long standing question is: in what sense does being in P make something practical?

    Olav #134 says “… polynomial-time are (it seems to me) bounded by polynomials with low constant terms and low exponents …”. This point often comes up, and I think it is a key point. It seems to be sort of a “folk theorem”. But I am skeptical of everything, because that is usually the smart bet.

    Here are a couple comments and questions.

    1. Can some clever construction make a problem in P of arbitrarily high polynomial degree that cannot be reduced?

    1A. If so, can such a problem be constructed for any “useful” applications?

    2. What is the highest Big Theta or Big Omega known for any useful problem? I seem to recall seeing n^3 log n somewhere. I am a duffer in this landscape, so I suspect higher levels.

    3. It seems there might be some way to organize known problems at different levels with reductions, etc., that might be useful.

    3A. Likewise, some organizing of the reductions used in proving problems NP-Complete. All of NP-C is equivalent modulo P; can it be stated what reduces to what at level n^1 or n^2, for example? Would this be useful in problem solving?

  143. Or Meir Says:

    Raoul #142: the classic time-hierarchy theorem shows that you can construct a problem of essentially any polynomial time complexity. Specifically, consider the following problem: does the machine M stop within n^k steps?
    A simple diagonalization argument shows that you cannot solve it much faster than n^k.
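
    (To make the construction concrete, here is a toy Python sketch of that bounded-halting problem; the machine encoding is made up for illustration. The decider simply simulates M for n^k steps, so it runs in roughly n^k time up to simulation overhead, and the diagonalization argument says no decider can do substantially better.)

        # Toy decider for "does machine M halt within n^k steps?".  M is given
        # as a transition table; a missing rule means the machine halts.

        def halts_within(transitions, tape, k):
            """transitions: dict (state, symbol) -> (new_state, new_symbol, move),
            where move is -1 or +1.  The machine starts in state 'q0' at cell 0."""
            bound = len(tape) ** k            # the n^k step budget
            cells = dict(enumerate(tape))     # sparse tape; blank symbol is '_'
            state, head = 'q0', 0
            for _ in range(bound):
                symbol = cells.get(head, '_')
                if (state, symbol) not in transitions:
                    return True               # no applicable rule: M has halted
                state, new_symbol, move = transitions[(state, symbol)]
                cells[head] = new_symbol
                head += move
            return False                      # budget exhausted, M still running

        # Example: a machine that walks right until it falls off its input.
        walker = {('q0', '1'): ('q0', '1', +1)}
        print(halts_within(walker, '1111', 2))  # True: halts in 5 steps, budget 16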

  144. Or Meir Says:

    Scott #137:
    As I said, mathematics and TCS are not good examples for Kuhn’s philosophy, since they do not try to describe the real world. Hence, any choice of problems that are “interesting” is legitimate.

    The situation is different when you consider sciences that try to describe the real world, e.g., physics. For such sciences, one could think that the only “legitimate” reason to switch theories and paradigms is that our understanding of the real world has changed, e.g., we got new evidence. This is what is meant by “the scientific method”.
    In particular, one could think that it is not “legitimate” to switch theories just because “this theory is more elegant” or “this theory has more interesting problems”.

    However, what Kuhn argues in “the structure of scientific revolutions”, with some interesting historical examples, is that scientists may switch theories and paradigms based solely on reasons of the above kind.
    One such example is the switch from Ptholmaic astronomy to Copernican astronomy. He argues that when that switch occurred, the Copernican astronomy actually did *worse* than the Ptholmaic astronomy in predicting the movement of the stars. Hence, the switch did not occur because of scientific evidence. It happened solely because astronomers felt that Ptholmaic astronomy is stuck and liked the Copernican astronomy better.
    You may say that this is legitimate consideration, but it definitely contradicts the naive view of science as being objective and based on evidence – here it was based solely on the subjective aesthetic taste of the astronomers.

    Moreover, Kuhn argues that scientists might refrain from abandoning theories that are elegant or have interesting problems, despite data that contradict the theory.
    Specifically, he points out that there is no clear distinction between “data that refute the theory” and “data that pose an interesting research problem that needs to be solved within the current theory”.
    He argues that, given evidence that contradicts the current theory, scientists will decide whether to treat it as a “refutation” or as a “research problem” based on their subjective feeling: if they feel the current theory is “stuck”, they might treat it as a refutation. Otherwise, if they like the current theory, they will just mark it as an “interesting research problem” and keep going.
    He also clarifies that this habit of scientists is justifiable in many cases. He gives as an example a historical event in which the Newtonian theory erred in predicting the movements of the planets. According to a Popperian approach, the scientists should have discarded Newton’s theory. However, the scientists decided to maintain Newton’s theory and treat this error as a research problem, and indeed, after a while the error was explained by the discovery of a new planet.

    What those arguments try to show is not a vulgar claim of the kind “scientists are pretenders”. However, they do rule out the opposite naive and vulgar claim that “science is based solely on evidence and objective criteria”.
    Instead, those arguments demonstrate that science is very much based on the subjective judgements of scientists. If we agree to that, then it would be weird to claim that such judgements are not subject to normal social processes such as trends and peer pressure.

    Kuhn’s philosophy also says something about the “progress” of science, and explicitly challenges the scientific claims to “progress”. He concedes, of course, that science steadily improves in predicting natural phenomena, but this is not the same as saying that science makes progress in understanding the universe.
    I don’t remember much of his arguments on this point, so I won’t try to explain them. Instead, let me try to give an argument of my own to illustrate the point:
    Imagine that we fired all the physicists and instead hired machine-learning people to predict natural phenomena.
    Imagine then that every year, those machine-learning people would run their algorithms on more powerful computers, each time obtaining new SVMs and neural networks that predict the natural phenomena better and better.
    In such a scenario, we would make constant progress in predicting natural phenomena. However, you would probably agree that in such a case we would not make much progress in understanding the universe.

    To conclude, if you would like to argue that science makes real progress in understanding the universe, you will need more than just pointing out that science improves in predicting natural phenomena.

  145. Or Meir Says:

    One more thing about my last point: it is interesting to ask what the difference is between science as practiced today and my hypothetical machine-learning scenario. Is there really a difference? Or are our scientific theories just “human-friendly” SVMs and neural networks?

    If the latter is the case, can we really claim to make progress in understanding the universe?

  146. Scott Says:

    Or, thanks for clarifying your (and Kuhn’s?) views. Three quick responses:

    1. I agree with David Deutsch that the point of science is not prediction, but understanding and explanation. So I’m not troubled in the slightest by scientists abandoning a paradigm that currently yields better predictions for one that currently yields worse predictions—if there are rational arguments (for example, from logical coherence or simplicity) that the latter paradigm will ultimately lead to the truer explanation. Of course, ultimately I’d like to see this judgment call verified by the new paradigm leading to better predictions as well.

    2. I’m also not troubled by your interesting and relevant example of the machine-learning/SVM researchers versus the scientists. And the reason is this: I predict that, if the machine-learning researchers wanted sufficiently good models—models that worked not just for one phenomenon but across a wide range of phenomena, and crucially, not just in situations “drawn from the same sample distribution” as situations seen before, but also in novel or hypothetical situations—then they’d be forced, in practice, to converge on science anyway. To put it another way: curve-fitting and regression are perfectly good tools of science—science already encompasses them—but they’re not particularly effective if they’re the only tools you have. They can very quickly lead you into a local optimum, whereas someone seeking explanation and understanding can get unstuck and find a better optimum.
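
    As a toy illustration of that point about curve-fitting (a minimal numpy sketch, not anything a real ML researcher would deploy): a degree-9 polynomial fit to sin(x) on [0, π] is nearly perfect inside the fitted range, and wildly wrong one period later:

        import numpy as np

        # Fit the "training distribution": 50 samples of sin(x) on [0, pi].
        x_train = np.linspace(0, np.pi, 50)
        coeffs = np.polyfit(x_train, np.sin(x_train), deg=9)

        print(np.polyval(coeffs, np.pi / 2))  # ~1.0: near-perfect inside the range
        print(np.polyval(coeffs, 2 * np.pi))  # far from sin(2*pi) = 0: extrapolation fails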

    3. Given my view in point 2, I’d say that yes, science can really claim to make progress in understanding the universe. That one’s just a no-brainer! 🙂 Even the machine-learning people would be forced to make progress in understanding the universe, if they really wanted good enough models.

  147. Or Meir Says:

    So if I understand correctly, you believe that the only way to improve prediction is to understand the world better. I sympathize with this belief, but the point is that it is a belief, not a fact, and it can be contested.
    In particular, people could legitimately disagree with the claim that science makes progress in understanding the world. Of course, no one can deny the progress in making predictions.

  148. Michael Bacon Says:

    Or,

    “So if I understand correctly, you believe that the only way to improve prediction is to understand the world better.”

    Did Scott say this? I think you can improve predictions in a lot of different ways. I remember a puzzle that I once figured out through trial and error over several hours. Only later did I learn that there was a simple formula for reaching the same result. Or, you can just get lucky: something works, and you just keep repeating it. But I would say, as with my puzzle, that while explanation wasn’t the only way, it was surely the best.

    “In particular, people could legitimately disagree with the claim that science makes progress in understanding the world.”

    Please explain to me how someone could legitimately disagree with the claim that science makes progress in understanding the world.

    Call me simple, but even the most cursory review of history seems to show progress driven by discoveries and “explanations” that science provides. And this process only seems to be evolving ever more rapidly.

    You want to play with words and say that all of this was due to better “predictions” and not to real understanding of the objective world? You’re free to do so, just as others are free to incorrectly claim that there really isn’t any “objective” world in the first place.

    Nevertheless, all of these words mean little when you fall and an objective rock hits your objective head, causing objective bleeding and demonstrating the scientific understanding of the world that your mother taught you regarding watching where you are going. 😉

  149. Scott Says:

    Or #147: Well, yes, the belief that the best way to succeed even by the “vulgar” metrics (e.g., more accurate prediction and control) is to understand the universe better, is not a belief that I’m certain could be justified a priori. I’d say the belief is “merely” borne out by 400 years of spectacular success at prediction and understanding going hand in hand with one another. Most of the “science studies” folks (i.e., the Kuhn-emboldened radical skeptics about scientific progress) seem to want to treat the actual success of science as outside the bounds of legitimate argument—as if, before engaging in these deep philosophical debates about incommensurability and paradigms, we need to mentally rewind the clock to 1400 AD, when it wasn’t yet obvious whether science or magic would work better. I see no reason why the skeptics should be granted that privilege.

    (Just saw that I crossed paths with Michael Bacon, making some of the same points…)

  150. Jay Says:

    Scott #149

    >the Kuhn-emboldened radical skeptics

    Who are you talking about?

    Or Meir #143

    Small typo? “that you can[‘t] solve it “

  151. Scott Says:

    Jay #138 and #150: If you read Fashionable Nonsense by Sokal and Bricmont, you’ll find a detailed recounting of who the “Kuhn-emboldened radical skeptics” are and how they became emboldened.

    I obviously don’t take the absurd position that Kuhn is responsible for misunderstandings of his ideas by other people. My claim, rather, is that Kuhn himself defended three radical, wrong ideas that might as well have been deliberately designed to provide ammunition to the “science is just another narrative” ignoramuses.

    The first wrong thesis is “incommensurability,” which says that one paradigm can’t even be meaningfully compared to another one, nor can people trained in the new paradigm understand the old one. If you consider examples like Newtonian mechanics vs. GR, this is just patently absurd: physicists who know GR also know Newtonian mechanics, and know how to derive it as a limiting case of GR.

    The second wrong thesis is that, when trying to explain episodes in the history of science (e.g., why theory A won out over theory B), it’s never permissible to make reference to current knowledge of what’s actually true. For me, this is just an arbitrary limitation on what kinds of thoughts we’re allowed to have (like radical finitism in mathematics), and I always rebel against such limitations. If we want to understand the arguments between Darwin and his contemporaries about how inheritance could possibly work, why shouldn’t we exploit the fact that we now know how it does work? It’s like saying that, if a detective is trying to retrace the steps that a heroin-addled murderer took through the woods at 3AM, the detective also needs to work at 3AM and be heroin-addled.

    But the absolute worst Kuhnian thesis—or the best, from the “science studiers'” perspective—is that there’s no sense whatsoever in which successive paradigms get closer to the truth. To me, this could only possibly seem like a sane thing to say as long as you don’t consider any specific examples. “Wait, are you literally saying that Newton’s physics wasn’t truer than Aristotle’s? That Einstein’s wasn’t truer than Newton’s? And if you’re not saying that, then what are you saying?”

    The Kuhnian theses can actually sound semi-plausible, as long as you focus only on stuff that happened before the 1600s. Pre-Galileo, it basically was one narrative versus another, with no obvious trend toward truth. But 400 years ago, something extremely important happened—a paradigm shift, if you will—and after that, the hard sciences really did get on a nearly-monotonic trajectory away from error. This seems to me like a central fact of human history that Kuhn didn’t want to acknowledge, and his science-studies admirers even less so.

    (One might say that Kuhn was like the Stephen Jay Gould of the history of science: he correctly noticed that scientific progress doesn’t happen at a uniform rate but is “bursty,” but then erred in drawing massively-grander conclusions from that little observation about speed.)

    If I got something wrong—e.g., if Kuhn didn’t actually hold the crazy positions I attributed to him, or they’re somehow less crazy than I said, or they didn’t inspire a whole generation of aggressively-ignorant “science studiers”—then let me know where I erred.

  152. srp Says:

    Scott’s stance reminds me of Bacon’s point in The New Organon that centuries of philosophical discussion hadn’t advanced understanding of anything very much: the conversation was still going around in circles. Nor had politics, or art, or anything else noticeably improved, with one exception: machines. So the lowly mechanics, working largely by experimental methods, had managed to create the only progressive area of civilization, and philosophers ought to emulate the mechanics.

    I remember my professor in college commenting that the status inversion Bacon advanced would be like somebody today arguing that everyone should be emulating janitors or garbagemen.

  153. srp Says:

    Scott #151: Kuhn’s second thesis is not at all like finitism in math. It is a basic proposition of causality: people could not have been influenced by things that they didn’t know. It would be terribly anachronistic historiographical practice to read today’s knowledge backward, like talking about what a fool Napoleon was for not using a B-52 at Waterloo.

  154. Scott Says:

    srp #153: No, that’s the wrong analogy. It’s forehead-bangingly obvious that we shouldn’t (e.g.) pretend that Darwin knew about DNA, but that sort of doofus anachronism isn’t what’s at issue here. The claim that I’m making, and that Kuhn rejected, is that we can get a whole new level of insight about Darwin by leveraging the fact that we know about DNA, or about Newton by leveraging the fact that we know about relativity. When I’m grading tests, yes, I want to get inside the students’ minds, to really understand how they came to make this or that error (and therefore how much partial credit they should get). But I also want to know what the right answer is!

  155. srp Says:

    Scott #154: But you were explicitly talking about explaining why theory A won out over theory B. If winning occurred at time T, it must have been for reasons known at the time.

    Trying to understand what you might be getting at, is it that we can understand why the experiments that convinced people back at T came out the way they did because we have better understanding here in the enlightened age of T+tau? So you could say that natural selection won out over Lamarckianism “because” heritability is through genes and that T+tau fact causes the data at T that disproved inheritance of acquired characteristics. I get that, but I’m not sure that Kuhn would have disagreed. Haven’t read much Kuhn, though.

  156. Scott Says:

    srp #155: Yes, that’s exactly what I’m saying. What happened at time T can only have been caused by things that happened at earlier times, but many of those earlier causes might only have been understood at later times. Kuhn, unless I’m mistaken, explicitly rejected the idea that you can use your later understanding of earlier causes: for example, he thought you had to read Aristotle’s physics forgetting everything you knew about Newtonian physics, never invoking the latter to explain why Aristotle observed what he did and how he came to make the errors he made (for example, by ignoring friction). And that’s one place where I think Kuhn was dead wrong.

  157. Douglas Knight Says:

    Scott, I see people fail to do what you call “forehead-bangingly obvious” all the time, so (1) you’re wrong that it’s obvious; and (2) it seems like a pretty reasonable interpretation of Kuhn that he’s suggesting it. Indeed, in rejecting an interpretation as obvious advice, it appears that you are making exactly that error.

  158. Jay Says:

    Scott #151

    [Fashionable Nonsense] Thanks! Now I think I get what you meant. My misunderstanding was cultural. I was trained in France, where post-modernism as you know it in America never gained detectable influence. Surprisingly, Wikipedia indicates that the inspiration for the post-moderns was a so-called “French theory”: actually a series of French philosophers and public figures who disagreed on many points. To my knowledge, none of them turned their thought into a weapon against science, except Lacan and his followers, who had (and still have) a huge and execrable influence in psychiatry and clinical psychology. I’m just not used to seeing Lacan as a postmodernist. I’m used to seeing him as a crank, period.

    But let’s go back to Kuhn. Yeah, sorry, but I do think you erred on all three of these theses:

    [Thesis 1]: Kuhn didn’t hold such a crude position on incommensurability, any more than Popper held a crude position on falsifiability. Of course one can in principle understand two different paradigms, once one realizes that (some of) the concepts are actually different.

    [Thesis 2]: as srp said, if the point is to understand why we switched from Lamarck to Darwin, we cannot say “because DNA will be discovered later”. I guess your point is actually different from what you wrote in #151, i.e., that what you meant is that we can extend Darwin’s ideas once we know about DNA. Of course, and that’s why we switched from natural selection to neodarwinism. Why do you think Kuhn objected to such moves?

    [Thesis 3]: yes, Kuhn denied there is such a thing as an absolute TOE, because he thinks we progress toward theories that help us understand our environment. In a way that’s a no-brainer, especially for a mathematician: don’t you think we will only ever examine a subset of all mathematically coherent theories?

    But more importantly, this opinion is absolutely not the denial of scientific progress you think it is. Allow me to quote extensively from http://plato.stanford.edu/entries/thomas-kuhn/:

    “Kuhn does briefly mention that extra-scientific factors might help decide the outcome of a scientific revolution—the nationalities and personalities of leading protagonists, for example (1962/1970a, 152–3). This suggestion grew in the hands of some sociologists and historians of science into the thesis that the outcome of a scientific revolution, indeed of any step in the development of science, is always determined by socio-political factors. Kuhn himself repudiated such ideas and his work makes it clear that the factors determining the outcome of a scientific dispute, particularly in modern science, are almost always to be found within science, specifically in connexion with the puzzle-solving power of the competing ideas.
    Kuhn states that science does progress, even through revolutions (1962/1970a, 160ff.).
    (…)
    The revolutionary search for a replacement paradigm is driven by the failure of the existing paradigm to solve certain important anomalies. Any replacement paradigm had better solve the majority of those puzzles, or it will not be worth adopting in place of the existing paradigm.
    (…)
    Hence we can say that revolutions do bring with them an overall increase in puzzle-solving power
    (…)
    science improves by allowing its theories to evolve in response to puzzles and progress is measured by its success in solving those puzzles”

  159. Raoul Ohio Says:

    Now and then: I think srp’s view is correct IF one is talking about the history of theory A and theory B. And Scott’s view is correct for comparing these theories now. This is the difference between Science and The History of Science.

    Or: in #143 you answered one of my questions (can a clever construction create a problem of any desired time complexity class?) affirmatively.

    Do you (or anyone else) know anything about the other question: Have any “useful” problems been identified with work(n) = Omega(n^d) for large d? Is d = 10 or 20 out of the question?

    If forced to define useful, my first stab might be: A problem from an application that existed before the algorithmic analysis was done. Or, maybe a problem with independent interest OTHER than as a result in complexity theory.

    Perhaps something like numerical solutions (with mesh spacing h = L/n) to a PDE in R^d should be considered trivial. It would be more interesting to see a “high d” problem without the d being built in.

  160. Scott Says:

    Jay #158: If you’re right that “postmodernism never gained detectable influence” in France, then that’s a candidate for the greatest irony in the history of the universe. In the US, postmodernism is considered a French import, and it was a nasty disease that destroyed many humanities and social science departments, turning them into opponents of science, clear thinking and writing, and the very idea of objective reality. (Fortunately, the hard sciences were mostly spared.) Most of the theorists who Sokal and Bricmont discuss are French (Lacan, Derrida, Kristeva, Irigaray, and I forget who else), and in fact their book was published in France (as Impostures Intellectuelles) before it was published in the US.

    Regarding Darwin and DNA, I’m confused about why my point seems so hard to understand. Suppose some people are searching a house for a hidden object, and failing to find it. Then in order for me to understand their failure, I’d like to know myself where the object is hidden! Yes, the people themselves don’t know where it’s hidden, and I know that they don’t know—but in order for me to know why they don’t know, I want to know. After Origin of Species was published, Darwin was severely criticized because it seemed like sexually recombining traits could only make all the traits closer and closer to an average, and Darwin had a huge amount of trouble responding to that objection. Today, we know why he had so much trouble: because the discrete, combinatorial character of DNA and of mutations would’ve been hard for him to imagine. Of course, Mendel hit on the truth by pure trial-and-error, but sadly his paper got lost in Darwin’s pile. (Oops, there I go again, talking naively and unapologetically about “the truth”! 😀 )

    Finally, before posting my earlier comments, I actually read that same Stanford Encyclopedia on Philosophy article on Kuhn that you quoted from extensively, because I feared that maybe I misjudged Kuhn. Instead the article only confirmed my worst suspicions about Kuhn’s role—partly but not entirely unwitting!—in fueling the postmodern “science-is-just-another-narrative” disease.

  161. Jay Says:

    1) Indeed! 😉

    “This American intellectual movement and the influence of these French authors in the United States were almost unknown in France when Impostures Intellectuelles, by Alan Sokal and Jean Bricmont, was published in October 1997.”

    (translated from http://fr.wikipedia.org/wiki/French_Theory)

    Notice that Sokal’s criticisms apply very well to French psychiatry too, so the book was helpful there as well.

    2) Kuhn wants to figure out why scientists sometimes switch from cooking one kind of dinner to cooking another kind of dinner, and argues that it’s for (almost) entirely rational reasons, e.g., it just tastes better. You want to know why the old plates were not cooked according to the true recipes in the first place, and criticize Kuhn for not making any use of actual truth in his account of how science changes. Well, he just had no use for this metaphysical concept. You agree this is a metaphysical concept, don’t you?

    3) you’ve read that “Kuhn himself repudiated such ideas” and that confirmed your worst suspicions about Kuhn’s role in such ideas. Well… let’s agree to disagree.

  162. Jay Says:

    Raoul Ohio #159

    To my knowledge there are no proven lower bounds that would fit your question, but we do have at least two polynomial-time algorithms for which either the constant c or the exponent d can be very large (finding a fixed minor in a graph, and approximating the volume of a convex body, respectively).

    http://en.wikipedia.org/wiki/Graph_minor
    http://en.wikipedia.org/wiki/Polynomial-time_algorithm_for_approximating_the_volume_of_convex_bodies

  163. Scott Says:

    Jay #161:

    1) Thanks for that translation! I strongly recommend Sokal and Bricmont’s book—I regard it as one of the all-time classics of debunking, alongside Martin Gardner’s Fads and Fallacies in the Name of Science and Carl Sagan’s The Demon-Haunted World. But to whet your appetite before you read it, try the hilarious review by Richard Dawkins.

    2) No, I don’t agree that the actual truth about (say) DNA or gravitation is a “metaphysical concept,” any more so than the truth about the history of science (i.e., which scientists said what for which reasons) is a “metaphysical concept.” If we’re unwilling to be realists about the physical world (!), then we certainly shouldn’t be realists about history: e.g., we should say that Kuhn’s reading of Aristotle is just another “narrative,” neither truer nor falser than anyone else’s reading—and that “the truth about Aristotle” is an incoherent, metaphysical concept. To whatever extent the science-studies people accept that there are actual, knowable, objective truths about what Aristotle wrote and thought, to that extent I’d say they’re hypocrites.

    3) No, the impression I got from the SEP entry on Kuhn was of a man who made cringingly-wrong statements, then protested when other people (the total relativists about science) enthusiastically carried his wrong statements to their even more cringingly-wrong logical conclusions. In criminal law, if you’re trying to hurt someone and end up killing them by accident, then you’re not guilty of murder, but you are guilty of manslaughter, which is a worse charge than ordinary assault. If you’ll pardon the metaphor, that’s sort of how I feel about Kuhn… 😉

  164. Scott Says:

    Raoul Ohio #142: Sorry for the delay in replying. Here are the answers to your questions:

      1. Can some clever construction make a problem in P of arbitrarily high polynomial degree that cannot be reduced?

    Yes, that’s the Time Hierarchy Theorem.

      1A. If so, can such a problem be constructed for any “useful” applications?

    I believe there are certain pebbling games that are known to require n^k time, when played with k pebbles on a graph of size n. But that doesn’t seem particularly “useful”! 🙂 And also, of course, if you wanted an n^10000 complexity that way, you’d have to specify in advance that you were playing with 10000 pebbles, which seems like a bit of a cheat.

      2. What is the highest big Theta or Big Omega known for any useful problem? I seem to recall seeing n^3 log n somewhere. I am a duffer in this landscape, so I suspect higher levels.

    Yes, n^10 or so shows up in the Jerrum-Sinclair-Vigoda algorithm for approximating the permanent, and also in the original HILL construction of pseudorandom generators from arbitrary one-way functions. And I’m pretty sure you could find n^20 or even n^40 in other places—in most cases, as the end result of composing a bunch of different reductions.

    However, it’s important to understand that, in all of these cases, the huge exponent is almost certainly just an artifact of a crude analysis, and a tighter analysis could (or in some cases, already has) give you a better exponent. (So for example, the blowup from the HILL reduction has since been brought down to n^4, I think.)

      3. It seems there might be some way to organize known problems at different levels with reductions, etc., that might be useful.

    Yes—welcome to complexity theory! 😀

    Seriously, you might enjoy this paper by Virgi Vassilevska and Ryan Williams, which gives a “completeness theory” for cubic time: i.e., a large collection of practically-interesting problems for which the best known algorithm takes n^3 time, but such that if you can improve on cubic time for any one of them then you also do so for all the rest. I’m sure there’s more to do in that direction.

      3A. Likewise, some organizing of the reductions used in proving problems NP-Complete. All of NP-C is equivalent modulo P; can it be stated what reduces to what at level n^1 or n^2, for example? Would this be useful in problem solving?

    Yes, in some cases people care a great deal about the tightness (as it’s called) of NP-completeness reductions. So for example, the most famous work my wife did (with her adviser, Ran Raz) was to construct 2-query PCPs where the blowup is nearly linear—thereby showing that the 2-query PCP problem is “NP-complete at level n^(1+o(1)),” in your terminology. It’s much easier (though still not very easy) to show that the problem is NP-complete at some large polynomial level.

  165. Or Meir Says:

    Michael Bacon #148: At the end of my comment #144 I discussed the difference between understanding and predicting. I believe it answers your objections.

    Scott #149: Regarding your claim
    “I’d say the belief is “merely” borne out by 400 years of spectacular success at prediction and understanding going hand in hand with one another.”

    I disagree. I don’t see any way to justify the claim
    A. “science has made progress during the last 400 years”
    without relying on the fact that
    B. “science has improved its predictions during the last 400 years”.

    However, since the belief that you are trying to justify is that “improving predictions requires improving our understanding”, you cannot use B to justify A – that would be a circular argument.

    So let me summarize my argument:
    1. There is a difference between being able to predict natural phenomena accurately and understanding the world.

    2. The reason that we believe that we understand the world better than we did 400 years ago is that we are able to make better predictions.

    3. However, the latter belief relies on the belief that prediction implies understanding, which is itself not justified a priori.

    4. If someone rejects the belief that prediction implies understanding, he/she could reasonably argue that there is no reason to believe that science has made any progress in understanding the world.

    In particular, he/she could reasonably argue that Newton’s theory is no better than Aristotle’s theory in terms of understanding the world, even though the former is undeniably better than the latter in terms of making predictions.

    Here is an example from Kuhn’s book: for the sake of the argument, let’s suppose that special relativity is the truth.
    Now, we know that special relativity is radically different from Newton’s theory in the way it describes the world (even though they agree on predictions at low speeds).
    Hence, Newton’s theory describes the world in a way radically different from the truth. In what sense, then, could it possibly be argued that Newton’s theory is “truer” than Aristotle’s theory? (Other than that the former makes predictions that happen to agree with the predictions of special relativity for some parameters.)

  166. Michael Bacon Says:

    1. “There is a difference between being able to predict natural phenomena accurately and understanding the world.”

    Yes there certainly is.

    2. “The reason that we believe that we understand the world better than we did 400 years ago is that we are able to make better predictions.”

    Yes, that is partially true. It’s the inductive side of why we think the ideas we have today are better than the ideas that we previously had. But it’s not sufficient to explain why current explanations are able to answer questions that weren’t previously asked, or why they generate new questions.

    3. “However, the latter belief relies on the belief that prediction implies understanding, which is itself not justified a priori.”

    Well, sometimes it does and sometimes it doesn’t, depending on how the prediction was arrived at: trial and error, blind luck, or following up on the clues supplied by the best current explanation of the question at hand.

    4. “If someone rejects the belief that prediction implies understanding, he/she could reasonably argue that there is no reason to believe that science has made any progress in understanding the world.”

    Yes, depending on the source of of the improvement in the prediction, one could naively argue that scinese

  167. Michael Bacon Says:

    I apologize, please read my final paragraph as follows:

    Yes, depending on the source of the improvement in the prediction, one could naively argue that science doesn’t progress, but that would only be if the improvement was unrelated to the new explanation and arose from things like trial and error, rather than from a better understanding of the natural phenomena at hand.

  168. Sniffnoy Says:

    Or, I don’t think Scott is making the argument you think he’s making. That is to say, I don’t think he’s claiming that because science has been successful at predicting, it’s been successful at advancing understanding. Rather, if I’m understanding correctly, he’s just taking the fact that science has been successful at advancing understanding as a fact that is evident from the history of science. That is to say, not only do we know that science has been successful at predicting, but we also know how those predictions are made, and it’s not by an opaque process like a machine learning algorithm, but rather by explicit theories that can be understood.

  169. Scott Says:

    Sniffnoy #168: Yes, thank you, very well said!

    Personally, I regard the possibility that science hasn’t advanced our understanding of the world as similar to the possibility that we’re all brains in vats being controlled by evil superintelligent baboons. That is, it’s a speculation that maybe I can’t refute to the satisfaction of the most dogged skeptic, but that seems so spectacularly unproductive that, if your worldview leads you to treat it as more than just an idle amusement, then I regard that as a sufficient reductio ad absurdum of your worldview.

  170. T H Ray Says:

    Timothy Gowers # 93:

    ” … I abandon the statement ‘unicorns exist’ not because it has been falsified but because there are very good reasons to suppose that if unicorns existed then I would know about it …”

    I don’t think so. I am more convinced by the arguments of Karl Popper protege David Miller in *Critical Rationalism: a restatement and defence* in the cleverly titled chapter 3, “A critique of good reasons.”

    As long as we’re in communication, however — please accept my appreciation for your role in *The Princeton Companion to Mathematics.* It is so very well done!

  171. mpc755 Says:

    The whole notion of “falsifiability” is meaningless in mainstream physics.

    The particle does not always travel through a single slit in a double slit experiment.

    How do you falsify the above statement? You place detectors at the entrances, throughout or at the exits to the slits.

    When you do this the particle is always detected entering, traveling through and exiting a single slit.

    The notion the particle does not travel through a single slit is refuted by the evidence. It’s been falsified.

    So, what does mainstream physics do? They ignore the physical evidence which refutes the notion the particle does not travel through a single slit and state that something else occurs when you don’t detect the particle.

    What is that something else? Well, now mainstream physics can make up all sorts of stuff about a multiverse or many worlds or whatever nonsense it wants because you can’t falsify made up nonsense.

    The notion the particle does not travel through a single slit is falsified by the physical evidence.

    However, mainstream physics is so screwed up it can’t understand that something as simple as the particle always being detected entering, traveling through, and exiting a single slit in a double-slit experiment is evidence that the particle always travels through a single slit. It is the associated physical wave in the aether which passes through both.

  172. Or Meir Says:

    BTW, Scott, I just recalled a point that I forgot earlier:

    We agreed that scientists often accept or reject theories based on considerations such as elegance, simplicity, and whether they pose interesting problems, and prefer theories that have these properties even if their predictive power is weaker than that of other theories.

    Now, I agree that this is a reasonable consideration, but the point is that it more or less kills Popper’s whole falsifiability theory. If scientists choose theories based on elegance rather than predictive power, it means that Popper’s description of science has little relevance to the way science is practiced in the real world.
    This is one of the things I meant when I said that Kuhn has contributed more to my understanding of science than Popper.

  173. T H Ray Says:

    Or Meir #172

    “If scientists choose theories based on elegance rather than predictive power, it means that Popper’s description of science has little relevance to the way science is practiced in the real world.”

    Popper’s correspondence theory of science is a real-world application of Tarski’s correspondence theory of truth. So I think Popper’s method would be relevant no matter what criteria scientists use to choose their theories.

    In other words, a theory is independent of any reason for its existence; if the abstract theoretical language of the theory corresponds to physical experimental results, the theory is scientific. In contrast to other major philosophers of science mentioned here — notably Kuhn and Feyerabend — Popper’s philosophy is about science, not scientists.

    Popper is the quintessential realist: “I should point out … that the correspondence theory of truth is a realistic theory; that is to say, it makes the distinction, which is a realistic distinction, between a theory and the facts which the theory describes; and it makes it possible to say that a theory is true, or false, or that it corresponds to the facts, thus relating the theory to the facts.” (Popper, *Objective Knowledge*)

  174. Rafe Champion Says:

    It is inspiring to find a scientist who has been prepared to read Popper and find out what he actually wrote, in contrast to the misreadings which are almost universal in the literature. I have produced a series of guides to Popper’s major books to indicate that falsification is just the tip of the iceberg of Popper’s contribution (and it is just what good scientists do anyway).

    http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=rafe+champion

    To understand Popper it is necessary to come to grips with at least six themes in his work, some of which represent a significant turn from the mainstream of epistemology and the philosophy of science. These are (1) the idea that all our knowledge is radically conjectural; (2) the idea of public or objective knowledge in addition to our subjective beliefs; (3) the rejection of the quest for the essential meaning of terms; (4) the “rules of the game”, or the social turn as Ian Jarvie called it, to take full account of the social nature of science and the role of methodological conventions, practices, and protocols; (5) the evolutionary approach, to take full account of the Darwinian revolution; and (6) the return of metaphysics to the heart of science and the philosophy of science, in defiance of the positivists who wanted to cast it out.

    Happy reading!

  175. Darrell Burgan Says:

    I have an ignorant layperson’s question, alas, but I’ll ask it anyway. Why is the notion of something being falsifiable constrained to the realm of empirical measurement? Isn’t it enough for a mathematical theory to survive all attempts to mathematically disprove it in order to be considered well-established? And if two theories are equally good at explaining empirical observations, why is there any bias in favor of one over the other?

    Obviously, I’m asking in reference to the endless debate over whether string theory/M-theory are scientific or not. The fact that they do predict many of the same things that classical theory predicts says something, doesn’t it?

  176. Scott Says:

    Darrell #175: It’s a great question. In fact, mathematicians do often think in terms of overarching hypotheses (e.g., GRH, Langlands, derandomization, the Unique Games Conjecture), which generate “predictions” for specific problems that can then be confirmed or refuted (or rendered more or less likely), thereby increasing or decreasing mathematicians’ confidence in the broad hypotheses themselves. Thus, even though the ultimate recourse in math is rigorous proof—and that does make it different from natural sciences—the way mathematicians actually work on a day-to-day basis is a lot closer to how other scientists work than many people would imagine.

    Now, regarding string/M-theory, you’ve put your finger on exactly the sorts of issues people argue about today. As I understand it, string theory has passed some stringent mathematical consistency checks, and it’s also arguably made striking “mathematical predictions”—mostly concerning dualities and mirror symmetries—that were then confirmed by more rigorous methods. So, a string-theory supporter might ask, why isn’t that good enough?

    Other people might respond that it’s not good enough because there are many beautiful areas of pure math (e.g., elliptic curve theory) that also “pass stringent consistency checks” and also “make striking mathematical predictions,” but that need not have anything directly to do with the physical universe. So, passing this sort of test can at most establish string theory as a beautiful branch of math, not as a correct physical theory.

    As you point out, another argument that’s often made is that string theory “predicts” many of the same things that established physical theories do: e.g., it “predicts” that, at low enough energies, the world should look like it’s described by general relativity plus quantum field theory. However, skeptics would reply that that’s not good enough, because GR and QFT were already known at the time string theory was formulated! Indeed, the way you get string theory is (essentially) to start with all the tools for doing calculations in QFT, and then “upgrade” the point particles to extended objects.

    Now, when you do that, you find that you get out GR as a “free, unexpected bonus.” That is indeed a remarkable fact: it’s the fact that got Witten and others excited about string theory in the first place, and that’s helped keep interest in the field alive for decades. But again, skeptics would say that it’s not remarkable enough, if one wants to establish string theory as the true fundamental theory, rather than “just” a good mathematical tool for understanding other theories. For a theory to achieve the status of (say) GR or quantum mechanics, many people would demand more: they’d demand that the theory tell us things about the physical world that we didn’t already know, and which we can then go out and verify are correct. (And of course, many string theorists themselves fully accept that as the standard, and have been trying to meet it.)

  177. Sam Says:

    Just one point about progress in philosophy of science: the Duhem–Quine thesis, which does, I think, add a great deal of nuance to falsifiability. Scientific knowledge doesn’t come as simple packaged falsifiable entities, but as a web of interrelated knowledge, and falsifying evidence requires a reassessment of the whole system, or at least the relevant parts of the system as a whole. This, to my mind, is why it is possible to run a range of empirical tests against a theory, as any theory is essentially a bundle of hypotheses. The important thing is that it produces at least some testable statements.

    Actually, to my mind it kind of fits in with Bayesian hypothesis testing, which sort of formalises in a more mathematical way the vague philosophical perspective of Duhem and Quine. To my mind any knowledge must be essentially probabilistic: beliefs formed by inductive hypotheses rely not on some magical essential property of truth, but on the coherence of multiple, repeatable observations which whittle down the probability of chance correlation in the observed data; and physical laws become laws because their probability of being wrong is so negligible that it is effectively zero. Scientific theories or hypotheses at the limits of any theoretical system have much higher probabilities of being wrong. Look at Laplace’s argument for the probability of the sun rising tomorrow. More importantly, scientific theories, by being a web of connected beliefs, and by connecting probable observed behaviour to other observed behaviour, help create a set of scientific beliefs which maximise the probability of any one observation or belief being true.
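
    (On the sunrise example: Laplace’s rule of succession makes the point concrete. A minimal Python sketch, using the day-count usually attributed to Laplace’s own illustration:)

        # Laplace's rule of succession: after s successes in n independent trials,
        # with a uniform prior on the unknown success probability, the posterior
        # probability of success on the next trial is (s + 1) / (n + 2).

        def rule_of_succession(s, n):
            return (s + 1) / (n + 2)

        # Laplace's figure: 5000 years of recorded history, i.e. 1,826,213
        # consecutive sunrises with no failures.
        print(rule_of_succession(1826213, 1826213))  # ~0.99999945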

  178. Sam Says:

    Also, about the history of science: I agree that artificial limitations on future knowledge are absurd. I interpreted Kuhn as making the point that in understanding why someone believed x, you have to understand their thought process in the context of, say, the society they lived in and their erroneous beliefs, so as not to produce a caricature of their theory. For example, in some senses it has to be appreciated with Aristotle’s physics that the inductive method hadn’t yet been invented, and he was using the deductive method because mathematical and logical deduction was in some sense one of the great levers of intellectual and ‘scientific’ progress in the ancient world. That doesn’t mean one should exclude the development of empirical/inductive methods of thinking later, but the explanations of Aristotle’s mistakes are deeper than that he was just simple-minded or that he merely misinterpreted evidence: as a society, many of the intellectual tools (I mean, ways of thinking) and much of the technology (partially due to a social structure which didn’t need machines, thanks to slave labour) which allowed the scientific revolution just weren’t in place in the ancient world, and wouldn’t be until the 17th century. But I may have misread him to suit my own ideas, and perhaps he does take the more radical (and foolish) position you ascribe to him.

  179. William Newman Says:

    Rahul wrote “I’d love to hear from String Theory fans the answer to this question.”

    I’m not a huge fan, but I am willing to cut the string theorists some slack. Relativity and QM (1) fit the world exceedingly well, even better than Newtonian mechanics and Maxwell’s E&M did, and (2) hate each other’s mathematical guts at least as fiercely as Newton’s and Maxwell’s equations did. By thinking very hard about the mathematical inconsistencies between successful theories, the string theorists are continuing a tradition which has enjoyed considerable success in the past. Unfortunately in practice there is room for a lot of nonsense in thinking very hard — we have mercifully forgotten a lot of the silly dead-end things that were pondered in trying to work out QM especially — but in principle it’s not obviously a waste of time, even if all you really care about is the falsifiable stuff that (you hope) comes out at the very end.

    Thinking very hard about how to reconcile Newton with Maxwell was very fruitful for Einstein; thinking very hard about how to reconcile Schroedinger/Heisenberg with Einstein was very fruitful for Dirac and for Feynman/Schwinger/Tomonaga. After that I lose the thread, ’cause I mostly know about stuff that can be seen without enormous particle accelerators and sensitive cosmological experiments, but there are probably a few other names to add. Out of all that thinking very hard fell falsifiable predictions about, e.g., mass and energy, about antimatter, and about volumes of precise spectroscopic data for weakly bound electrons, plus (unfortunately largely obscured by the nasty computational difficulties of computational chemistry and solid-state physics) rather large effects in the behavior of strongly bound electrons in ordinary materials and chemical reactions. I think by the time you get to QED people had a pretty good idea what kind of falsifiable results they were looking for while they were poking around in the math, but I think Einstein and Dirac would have had a hard time telling you in advance that their theories would tell you about mass-energy equivalence and antimatter respectively. :-| So it’s not always realistic to demand that mathematical physicists pay their falsifiability rent in advance.

  180. Thomas Homan Says:

    What really strikes me about the Mathematical Universe Hypothesis is its completely naive approach.

    Anybody with basic knowledge of mathematics, logic, computability theory, and probability theory immediately understands that this is naive, to say the least.

    1) There is no such thing as a “set of all mathematical structures” (Russell); assuming one leads directly to inconsistency! (See the sketch after this list.)

    2) For probability theory to work we need a well-defined sample space carrying a measure; a countable sample space is the simplest way to get one, and “all mathematical structures” provides neither.
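
    (A minimal sketch of the Russell-style argument behind point 1: a “set of all mathematical structures” would in particular contain every set, and unrestricted comprehension over it yields

        R = \{\, x : x \notin x \,\}, \qquad R \in R \iff R \notin R,

    a contradiction.)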

    What is possible is to define a constructive subset of mathematics, given by certain (countably many) Turing machines. This has been done by Jürgen Schmidhuber, and it is at least a proper, consistent theory of mathematics/computability; see the sketch just below.
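
    (To illustrate the countability this construction leans on, a schematic sketch; run_for_steps here is a hypothetical stand-in for a step-bounded universal machine, not anything from Schmidhuber’s papers:

        from itertools import count, islice

        def all_programs():
            # Enumerate every finite binary string, shortest first.
            # These stand in for Turing-machine programs, so the whole
            # collection is countable.
            yield ""
            for n in count(1):
                for i in range(2 ** n):
                    yield format(i, "b").zfill(n)

        def dovetail(run_for_steps, phases):
            # In phase k, give each of the first k programs a budget of
            # k steps, so every program eventually receives arbitrarily
            # many steps and no non-halting program blocks the rest.
            for k in range(1, phases + 1):
                for program in islice(all_programs(), k):
                    run_for_steps(program, k)

        # Trivial stand-in interpreter, just to show the loop runs:
        dovetail(lambda program, steps: None, phases=5)

    Dovetailing like this is how a single computation can interleave a whole countable ensemble of candidate universes.)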

    But as of today, real-world physics does not even seem to be computable. Following standard physics, then, our universe is not one of the universes defined by Jürgen Schmidhuber!

    I could now go into details, but the sheer fact that the author does not seem to be aware of this basically takes away all credibility.

    The next step would be to take quickly computable universes into account and play around with those.

    Two easy predictions:

    If our universe is constructive (based on some Turing machine), then QM is ontologically wrong (whatever exactly that means).

    Second, if our universe is quickly computable, then QM is wrong and experiments should eventually show this (quantum computing does not scale in such a universe).
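
    (To make the scaling point concrete, a rough back-of-the-envelope sketch: a brute-force classical simulation of an n-qubit state has to track 2^n complex amplitudes, which is the naive reason scalable quantum computing looks incompatible with a universe restricted to polynomial resources:

        # Resources a brute-force statevector simulation needs: 2**n
        # complex amplitudes at 16 bytes each. Anything limited to
        # polynomial resources is swamped long before n = 100.
        for n in (10, 30, 50, 100):
            amplitudes = 2 ** n
            print(f"{n:>3} qubits: {amplitudes:.3e} amplitudes, "
                  f"{16 * amplitudes:.3e} bytes")

    Of course, this bounds only the obvious simulation method, not every conceivable one.)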

  181. Phil T Says:

    @Scott 21

    Sorry I’m a year late to the party. Just wanted to remark on this:

    “Thinking about it like a mathematician, I see no reason whatsoever to privilege statements with a single universal quantifier, over statements with any other combination of quantifiers.”

    We do have Universal Algebra going for that one 😛

    http://en.wikipedia.org/wiki/Universal_algebra
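
    (To spell out the connection: universal algebra restricts attention to identities, that is, to sentences whose only quantifiers are universal ones out front, as in the associativity axiom

        \forall x \,\forall y \,\forall z:\; x \cdot (y \cdot z) = (x \cdot y) \cdot z,

    whereas statements elsewhere in mathematics may alternate quantifiers freely.)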

  182. Does Science Need Falsifiability? - The Nature of Reality | PBS Says:

    […] what science really aims at,” argues MIT computer scientist Scott Aaronson, writing on his blog Shtetl Optimized. Yet, writes Aaronson, “falsifiability shouldn’t be ‘retired.’ Instead, falsifiability’s […]

  183. Guenter Says:

    @Scott: Which books by Popper did you read? Was Popper ever confronted with the “doofus example” (the zebra example)? How did he reply?