Worldview Manager is live

A year ago, I wrote a blog entry seeking summer students to create a Worldview Manager: a web application that would prompt users to state their beliefs about various statements, and then notify them if two or more answers were in “tension” with one another, giving them the chance to modify their beliefs and thereby resolve the tension.  The idea was then covered in a short piece by Lee Gomes at Forbes.com.

I ended up selecting two students: Louis Wasserman of the University of Chicago, and Leonid Grinberg of Belmont High School.  They excelled at all aspects of this project, from low-level hacking to high-level design decisions.  If there are enough young ‘uns like them, I feel better about the future of the United States.

As a result of their work, I’m pleased to announce that you can now try Worldview Manager here.

Currently, we have only a limited selection of “topic files”, all of them rather nerdy: Complexity Theory, Strong AI, the Axiom of Choice, Quantum Computing, Libertarianism, and Quantum Mechanics.  However, if there’s enough interest, we’ll probably add more topic files soon.  In the meantime, if you have any interest in contributing topic files yourself, please let me know!  It’s actually not hard.

And if you have praise, gripes, constructive feedback, nonconstructive feedback … hey, the comments section is right underneath this sentence.

77 Responses to “Worldview Manager is live”

  1. Evan Jeffrey Says:

    The quantum mechanics section assumes the statement “mixed states, as the most general representation of an agent’s knowledge of a quantum system, are more fundamental than pure states.” This makes some positions untenable regardless of what you choose for the other options. Evidently none of your testers are “pure state of the universe” people…

    Otherwise it looks pretty awesome.

  2. Rui Ferreira Says:

    I think there’s going to be a lot of controversy over your definition of conflicts. Let me start 🙂
    * Let A be the statement:
    Others should be judged only by their words and actions and not their physical appearance or some other external attributes.

    * Let B be the statement:
    A simulation of a conscious being is conscious.

    Then A implies B, because
    a simulation, by its nature, acts and says things indistinguishable from that which it simulates.

    Let’s say we have a machine that runs all the possible programs (they are countable), each with quadratic delay (for example: P1 i1; P1 i2, P2 i1; P1 i3, P2 i2, P3 i1; etc.). Then it’s possible to have a simulation of an intelligent program without the machine being intelligent itself. the K(program that simulates)

  3. Kyle Says:

    Users need to have some sort of progress notification!

  4. Anonymous Says:

    Specific feedback:

    1. The colored bars showing amount of agreement or disagreement with a statement on the result page need help. Why are they different lengths? Why are there two of them? Why does 40% agreement show up as dark blue next to the pink on the top bar and 70% agreement show up as dark blue next to the light blue on the bottom bar? Why is there a nub of blue on the top bar when I’m in 100% agreement with a statement? (Firefox on Linux in case it’s a browser issue.)

    2. I’d appreciate a progress bar while filling out the statements.

    3. You may wish to find a student to make a prettier web page for you.

    General feedback:

    I’m not sure it entirely makes sense to apply a percentage to a general statement that is easy to both strongly agree and disagree with depending on the situation. The example that made me saddest was “Free speech in all forms should be allowed, even when it offends someone, incites violence, or places others in immediate danger.” The Supreme Court has made a distinction between the first and third cases, and I would like to agree with them without being “neutral”.

  5. William Says:

    Wow, would it be possible to add some soothing music to the “resolve conflict” screen? Because my heart rate was going through the roof after only a few conflicts. Your team has done a very good job of automating the Socratic booby trap.

    It might be nice to be able to look up definitions once a conflict arises, or at least see what other users think about possible definitions of a term. I’m having a lot of trouble figuring out what the appropriate definition of a physical body should be in the strong AI section.

    To add to what anonymous says in #4 about statements where disagree, neutral, and agree are not appropriate responses, maybe you could allow us to skip statements.

  6. WF Says:

    I agree with Rui about the conflicts with the “others should be judged…” statement, though I’d just say that it would seem that the statement “X is a simulation” would also be in conflict with that statement.

    I was going to say that using Worldview Manager is a bit like arguing with a certain type of geeky internet commenter, except that it’s the other way around 🙂

  7. Liron Says:

    I refuse to resolve this tension, in which I believe A and B and D. I only believe D in the sense that mind uploading (C) is theoretically possible.

    Other than that, this is a great experiment and looks very well executed.

  8. onymous Says:

    When measuring a quantum state, we have the freedom to choose the measurement basis (for instance, whether we want to measure the position or momentum).

    In principle? In practice? In some subtler way that is going to be a “gotcha!” once I answer a few more questions? Hmm.

  9. Liron Says:

    Also, the whole time I was doing Libertarianism, it kept pestering me about my tensions from the Strong AI section, even at the Libertarianism ending screen.

  10. Evan Jeffrey Says:

    That is not a gotcha. I had no problem getting a consistent answer to the quantum computing section (which is much shorter than the strong AI topic…)

    Liron, I also had that conflict for the same reason. There didn’t seem to be a satisfactory way to resolve it even accounting for “what they actually mean.”

  11. onymous Says:

    The explanation here is incomprehensible and seems to have missing words?

  12. Chris Says:

    The program seems to be asking people for their degrees of belief with the slider option, yet you take even the slightest bias as certainty in that direction. If people do indeed believe (or not believe) with certainty these statements, then no argument you provide will sway them – it will more likely strengthen those beliefs!

    Perhaps a probabilistic “extension of logic” is more appropriate.

    James Clerk Maxwell:
    “The actual science of logic is conversant at present only with things either certain, impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore the true logic for this world is the calculus of probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a reasonable man’s mind.”

  13. Evan Jeffrey Says:

    Also, while it does seem to have met the stated goal of allowing multiple self-consistent sets of beliefs, I still can’t help getting the feeling that Socrates is trying to convince me that libertarianism is irrational and that the computational-complexity form of the Church–Turing thesis is wrong. Since I agree, it wasn’t a problem 🙂

  14. Chris Granade Says:

    I like it! A few statements are a bit confusing, but the concept and execution are wonderful.

  15. onymous Says:

    I give up on the quantum mechanics one. Something is really broken there, but I’m not sure what. I think it’s assuming my answers to questions that it didn’t ask me.

  16. Carl Says:

    This program should be named “The Hobgoblin” in honor of Emerson.

  17. Matt Leifer Says:

    It is a very interesting site, or at least a very amusing site – I am not sure how seriously to take it at the moment. A few comments:

    – The complexity topic gave a 404 error when I tried it.
    – Some of the questions are very US centric, particularly in the Libertarianism topic. What difference does it make to me what it says in the US constitution if I am not a US citizen? What if I live in a country that does not have an official written constitution?
    – I don’t follow some of the reasoning in the Quantum Mechanics topic. I am repeatedly being told that I am inconsistent even though I am fairly convinced that my position is still an open possibility (of course, experts may be even more likely to be inconsistent than non-experts in this case). In any case, I’d like to see the source file so that I can hack my superior belief system into it.

  18. Ilmari Says:

    If you believe that people should only be judged by words and actions, then you believe that those are enough to judge whether or not someone is conscious.

    I do not remember saying that one can determine if a being is conscious. Even if I did, the conflict page did not say how.

  19. John Sidles Says:

    That is a wonderful application! Brilliant! (in the best Harry Potter sense of the word). These students are to be congratulated!

    As an exercise, I took the quantum test, while confining my responses to “completely agree” or “completely disagree”.

    This exposed what seems to be a bug in the ruleset (“http://projects.csail.mit.edu/worldview/tension.php?tension_id=10746”), in which the following is a theorem: E ⟹ C, where

    E: “Decoherence provides a fully satisfactory solution to the interpretive problems of quantum mechanics.”

    C: “The measurement problem is fundamentally unsolvable, just like the “hard problem” of consciousness.”

    Unless I’m totally off-base, isn’t it the case that not too many people would accept that E ⟹ C?

    An axiom that perhaps should be amended or omitted is “One of C or D”, where

    D: “It’s reasonable to hope that future experimental or theoretical discoveries will clarify the interpretation of quantum mechanics.”

    The problem in asserting “One of C or D” as an axiom is (of course) that it neglects Monty Python’s “French Knight Alternative”:

    “Grail? No thanks! We’ve already got one!” 🙂

  20. Scott Morrison Says:

    Liron #7.

    I agree, I refuse to resolve #, but for a slightly different reason. As per William in #5, there’s a big difficulty with “material body”.

    I’m happy contemplating a conscious being with no material body, but having an entirely physical ‘description’, for example existing only as interactions of gravity waves.

  21. Scott Morrison Says:

    More generally, it seems you could usefully have a list of “most popular unresolved tensions”. It may well be that many things appearing on this list are because of problematic questions.

  22. Nick Langille Says:

    The Strong AI topic is a long slog and, in my opinion, contains a lot of statements that depend on your definitions of words like “consciousness”, “emotions”, and “simulation” in ways that lead to lots of spurious tensions.

    The Libertarianism one is good, and in reviewing my answers I found one interesting tension. The system didn’t point it out to me though, maybe my answers just didn’t meet a certain threshold amount of tension. The tension was between government enforcement of harsh contracts and the rights of adults to make their own decisions (like signing contracts).

  23. Evan Jeffrey Says:

    John, my interpretation of statement E implies my interpretation of C, I think. The way I see it, the decoherence interpretation (or non-interpretation?) of QM is essentially saying that the measurement problem is asking the wrong question. In that sense, I guess that I would say that relative to E, C is equivalent to the French Knight Alternative.

  24. g Says:

    I agree with other commenters above that there are some extremely spurious “tensions” in the Strong AI topic. I’ve commented more extensively on those in the WM itself, so shan’t go into details here.

    And yes, the “you have unresolved tensions” notice should distinguish between unresolved tensions in the current topic and unresolved tensions in others.

    Regardless, this is extremely neat. Well done!

    It’s unfortunate that submitting a comment causes a page reload and therefore loses whatever assessment you’d made of the question at issue. Especially as the *first* thing that happens when you submit is a nice AJAXy update that doesn’t reload the page.

  25. Anonymous Says:

    It’s really slow. The latency between each question is annoyingly high. I wish you had put all of the questions on the same page, one on top of another, so that I could go down the page, dragging sliders, and it could alert me of a tension (perhaps in a sidebar on the right) as soon as it spots one — without requiring me to click “submit” and wait for the web server to slowly respond.

  26. g Says:

    Whoa, there are some serious bugs in the QM topic…

  27. g Says:

    It could stand to do a better job of not bothering you with redundant tensions. That is, I sometimes found that it told me both “You said A and B, but A => C => not-B” and “You said B and C, but C => not-B”.
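    One possible dedup heuristic, assuming each reported tension carries the chain of implications behind it (a data shape invented here for illustration, not the site’s actual format):

```python
def dedupe_tensions(tensions):
    """Drop tensions that merely restate another tension's final inference.

    Each tension is a pair (premises, chain): the set of statements the
    user asserted and the list of implications used to derive the clash,
    e.g. ({'A', 'B'}, [('A', 'C'), ('C', 'not B')]).  Shorter chains are
    reported first; a chain whose last implication was already reported
    is treated as a redundant restatement of the same conflict.
    """
    seen_last_steps = set()
    kept = []
    for premises, chain in sorted(tensions, key=lambda t: len(t[1])):
        if chain[-1] not in seen_last_steps:
            kept.append((premises, chain))
            seen_last_steps.add(chain[-1])
    return kept
```

    On g’s example this keeps only “You said B and C, but C => not-B”, since the longer A => C => not-B chain ends in the same inference.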

  28. Anonymous Says:

    The site seems to attempt to find tensions by treating responses to the questions as black-and-white matters, but most people are likely to respond in shades of grey.

    The inference rules for logical propositions (e.g., given premise P and premise P=>Q, we can conclude Q) apply when we are dealing with logical propositions. A logical proposition must, by definition, be either true or false: it can’t be 70% true. Yet your site has people answer “I agree 70% with such-and-such”. As such, it’s not dealing with logical propositions, and it is a logical fallacy to apply the rules of inference for logical propositions when the underlying assertions are not logical propositions. As a result, the reasoning used by the web site is unsound in a fundamental way. I saw several instances of this logical fallacy in my own use of the site.

    For instance, suppose I 60% agree with P, and I 60% agree with P==>Q. It doesn’t follow that I’d better agree with Q more than I disagree. If I say that I tend to disagree with Q more than I agree, that’s not a logical inconsistency. (For instance, maybe I believe that P is right about 60% of the time, and I believe that P==>Q is right about 60% of the time. Then it could be the case that Q is right only about 36% of the time.)

    In general, I think you need some kind of logical system that is able to reason with shades of grey. Fuzzy logic? Probability theory? I don’t know what to recommend, but the current approach seems to have some problems.
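    The commenter’s 60%/60%/36% example follows from the Fréchet inequality, which is one probabilistic system of the kind asked for. A minimal sketch (the function name is mine):

```python
def modus_ponens_bounds(p_a, p_a_implies_b):
    """Frechet-style bounds on belief in B, given beliefs in A and A -> B.

    B is entailed by A together with (A -> B), so
        P(B) >= P(A and (A -> B)) >= max(0, P(A) + P(A -> B) - 1),
    while nothing about the two beliefs alone forces P(B) below 1.
    """
    lower = max(0.0, p_a + p_a_implies_b - 1.0)
    return lower, 1.0
```

    With 60% belief in both P and P ⟹ Q, belief in Q is constrained only to the interval [0.2, 1], so 36% belief in Q is perfectly coherent, exactly the commenter’s point.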

  29. g Says:

    I haven’t seen any instance where it says “You indicated that you’re completely sure of A, but almost neutral about B. But A very definitely implies B, because […], so you should be at least about as sure of B as of A”. That would be nice, though you might want to use a term other than “tension” for such cases. (Of course this generalizes beyond the simple case I just described.)

  30. Markk Says:

    I thought it was no good. I tried the QM area, and the “tensions” were, I guess all I can say, stupid. Standard quantum field theory answers give irreconcilable differences. Almost worse than useless for me personally – quite misleading. Enough of a downer that the effort I was contemplating, to point out what I thought was wrong, doesn’t right now feel worth it.

  31. g Says:

    I wonder whether the brokenness of the QM topic is related to the fact that its topic file references a proposition INFO that isn’t actually defined. (It is also allegedly implied by both MIXED and !MIXED, which doesn’t seem likely to be right. On the other hand, supposedly UNIVERSAL => !INFO, from which we deduce !UNIVERSAL unconditionally, which seems kinda sad.)
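    g’s unconditional deduction of !UNIVERSAL can be checked mechanically; the three axioms below are my reading of the comment, not the actual topic file:

```python
from itertools import product

def not_universal_is_forced():
    """Brute-force check of g's reading of the QM topic file.

    Axioms: MIXED -> INFO, (not MIXED) -> INFO, UNIVERSAL -> not INFO.
    Returns True iff UNIVERSAL is false in every model of the axioms,
    i.e. !UNIVERSAL really does follow unconditionally.
    """
    implies = lambda a, b: (not a) or b
    for mixed, info, universal in product([False, True], repeat=3):
        holds = (implies(mixed, info)
                 and implies(not mixed, info)
                 and implies(universal, not info))
        if holds and universal:
            return False  # found a model of the axioms with UNIVERSAL true
    return True
```

    Since both MIXED and !MIXED imply INFO, every model has INFO true, and UNIVERSAL ⟹ !INFO then rules UNIVERSAL out entirely.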

  32. Jeremy H Says:

    A few comments:

    +1 to having a progress bar

    Don’t display the discussion about a question until after I submit an answer. Otherwise, the discussion will bias my answer.

    Below the slider, give a paragraph explanation of the question, to resolve the ambiguity that many people have complained about.

    Overall, a very interesting idea. Could you perhaps publish an API for creating new topics? This would allow people to extend it easily, and hopefully crowdsource some QA.

  33. algebrumbertheorist Says:

    This may be a cop-out, but I want a “MU” button to unask questions that I consider to be meaningless. (Possibly just clicking “Completely Neutral” essentially has the same effect at present?)

  34. Leonid Grinberg Says:

    Hello, this is Leonid, one of the students who worked with Scott on the website. I would just like to say thank you for all of your comments and bug reports — we will try to fix as many of them as possible.

    However, since several people asked to look at the source and API and such, I’d just like to point your attention to http://worldview.csail.mit.edu/about, which contains links to the public git repository as well as to the manual, which gives a complete description of both the site’s API and the format of the topic files. Unfortunately, we don’t yet have a feature for users to upload topic files to the site automatically.

  35. harrison Says:

    Scott, in talking to Louis (and looking at some of the data), I got the impression that it was resolution-based, so that if, for instance, A AND C is a contradiction and B AND ~C is a contradiction, then answering A AND B will give you a tension. Maybe I just haven’t answered the questions right, but it doesn’t seem to be behaving that way. Is it supposed to?
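    A resolution step of the kind harrison describes would derive the missing tension directly; a minimal sketch (the clause encoding as frozensets of string literals is my own):

```python
def resolve(c1, c2):
    """All one-step propositional resolvents of two clauses.

    Clauses are frozensets of string literals, with '~X' the negation
    of 'X'.  A contradiction between answers A and C is the clause
    {'~A', '~C'}; a contradiction between B and ~C is {'~B', 'C'}.
    """
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents
```

    Resolving {'~A', '~C'} with {'~B', 'C'} on C yields {'~A', '~B'}, i.e. asserting A and B together should itself register as a tension, which is the behaviour harrison expected.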

  36. Jair Says:

    I really like it. I found the section on “Strong AI” very thought-provoking, especially after reading Douglas Hofstadter’s “I Am A Strange Loop”. At the same time, I think a lot of people would hate it. For me, it is interesting to explore the logical consequences of beliefs; when the system detects a tension, I take it as a spur to conversation and reflection. I imagine others would take it as a personal attack on their value system!

    Many seem to be complaining about a lack of definitions and specificity, but any discussion in the English language is bound to be imprecise. Any definitions for ideas that aren’t purely mathematical are going to be circular or else reliant on yet more undefined terms.

    I do wish there were an option to wait until the end of the survey for the list of discrepancies. Toward the end there were several inconsistencies that were all related, and I felt like I could not make any progress toward completing the survey.

    Also, I will say that the choice of topics definitely reflects the interest of the designers! I wish there were more options that were accessible to those who aren’t technically minded.

  37. Akhil Mathew Says:

    I like the idea quite a bit, but there seem to be a few confusing statements in the one about the axiom of choice:

    1. As some people have mentioned, it didn’t seem clear that “the existence of a translation and rotation-invariant measure on R^3” meant *every* subset of R^3 is in that sigma-algebra, because I initially assumed that Lebesgue measure would work (assuming the axiom of choice). After all, don’t people refer to Lebesgue measure as rotation and translation-invariant?

    2. For “the cartesian product of nonempty sets is nonempty,” I think this could be phrased as “the cartesian product of a nonempty collection of nonempty sets is nonempty” to be absolutely sure to avoid any confusion; I didn’t think it was entirely clear that you were allowing more than two sets in the cartesian product.

  38. Greg Kuperberg Says:

    I took the one on libertarianism. It became frustrating. The “manager” caught me on one contradiction in my answers that I hadn’t thought through, which was arguably a little bit helpful, but mostly it did not discover much “tension”.

    What was frustrating was that the quiz did not acknowledge the tendentious nature of politics. People comfort themselves with euphemisms. They appeal to sweeping principles selectively. For instance what does it mean if you don’t think that “Certain groups of individuals should be denied some rights that others are granted”. It could mean that you think that gays have the right to marry. Or it could mean that you think that gays should not have the “special right” to rewrite marriage laws.

    Generally over time I’m less and less impressed with abstract “world views” in politics, and more interested instead in what people really want in practice.

  39. Dan Says:

    There needs to be an “I don’t know button”.

    “Can quantum computers be built?”

    I don’t know! Am I supposed to select “Completely neutral” in that case? I don’t know!

  40. Josh Says:

    I agree with Anon#25: it’s waaay too slow. I don’t know if I agree that the way to solve this is to have all the sliders on one page, and in fact that might be counterproductive. One of the interesting things is that, by having the sliders separate, you’re less likely to “catch yourself” before stating an inconsistent belief, and more likely to state what you “truly believe.” But I still agree that it’s too slow. :/

    Still fun though!

  41. mitchell porter Says:

    I just ploughed through Quantum Mechanics, deferring all unresolved tensions. If I have counted correctly, from 17 opinions I generated 93 unresolved tensions. 🙂

  42. Sim Says:

    I love the idea, however: 1) waaay too slow 2) 500 internal errors 3) many statements need clarification 4) all ABC logical chains I saw suck.

  43. I Agree with Greg Kuperberg Says:

    I agree with Greg Kuperberg’s comment whole-lungedly.

  44. Jpa Says:

    WM doesn’t seem to like parallel computation (try starting two tests in parallel).

  45. David Says:

    About as balanced as the balanced samples in Yes, Prime Minister episode 2: http://www.yes-minister.com/ypmseas1a.htm

  46. foo Says:

    There are a few bugs to be ironed out with session information. I started the complexity theory topic, then I found myself out of my depth, and switched to the libertarianism topic. The first question that greeted me in trying to determine my views on libertarianism was “Polynomial Identity Testing is in P”. Yes, I believe that Congress just tried to pass a non-binding resolution to that effect.

  47. matt Says:

    I think there’s a problem with this formalization because each of the axioms (A->B etc…) is a formal mathematical statement about sentences which are themselves vague and open to interpretation. So, each step in the reasoning is mostly right (what I mean by the sentence roughly corresponds to what you mean) but by the time several steps are taken the meanings don’t correspond any more. i.e., I say A and not C, and WM says this is inconsistent because A->B, B->C. However, I think A implies a certain meaning of B, and a slightly different meaning of B implies C, but that isn’t the meaning of C I was disagreeing with when I said not C. So, I think the Worldview Manager provides a counter-example to an old blog post of yours ( https://scottaaronson-production.mystagingwebsite.com/?p=232 ) showing that sometimes it isn’t right to follow modus ponens blindly.

  48. matt Says:

    Btw, let me ask: since I think each of the axiom choices is a little open to dispute (the meaning of the sentences being a little vague), did you have to tweak the axioms a few times before they were logically consistent themselves?

  49. John Sidles Says:

    Matt Says: Did you have to tweak the axioms a few times before they were logically consistent themselves?

    LOL! We can be confident that these students have already done plenty of “axiom tweaking” … and that there is plenty more to come.

    Because these students are becoming well acquainted with academia’s single most valuable lesson: “The teacher learns more than the students.” Or in the case at hand, “The programmers learn more than the users.”

    A very natural question is: What complexity classes best capture the real-world informatic hierarchy of teaching versus learning (TL)?

    Because there is no doubt that finding ways to teach subjects well is a very hard problem!

    Hmmm … wouldn’t a world where T=L be similarly radical to a world where P=NP? It would be a world with no need for professors … or professional programmers either … because it would be efficient for everyone to be their own professor and do their own programming! 🙂

  50. John Sidles Says:

    Just to follow-up on the above TL theme, at this summer’s FOMMS conference there was a great deal of professorial hand-wringing over what was called “The Education Problem”: the near-total inability of undergraduate-degree students to apply simulation algorithms effectively.

    And this quality was perceived at FOMMS to be pretty clearly correlated with that elusive quantity called “mathematical maturity”, one element of which is said to be “learning through understanding rather than by memorization”.

    A heuristic discriminator of mathematical maturity—that works pretty darn well in the real world—is “An ability to read ‘yellow books’ with understanding and enjoyment” … even better is “An ability to write ‘yellow books’ with understanding and enjoyment.”

    And this gets to the heart of what (IMHO) Scott’s students have attempted to create (and IMHO have largely succeeded in creating).

    Doesn’t their engine of axioms and inference rules amount to a fully instantiated cognitive framework for teaching-and-learning hard subjects like information theory, quantum mechanics, and libertarianism at the ‘yellow book’ level?

    If so, bravo to Scott for setting this challenge, and to these students for outstandingly fulfilling it! 🙂

  51. Warren Says:

    A lot of the tensions in the “strong AI” topic rely on implicit definitional assumptions such as “consciousness is a feeling” and “all minds are conscious”. I suggest making these into explicit world views that people can disagree with. Alternatively provide precise definitions for all important English words used.

  52. John Sidles Says:

    Warren Says: [the students should] provide precise definitions for all important English words used.

    Gosh … everyone who has ever attempted this has failed … starting with Samuel Johnson.

    We could imagine a world in which English was codified as axiom-based, logically dynamical language … but wouldn’t the result be a world of people who thought like robots, rather than a world of robots who thought like people?

    This project will serve its purpose best (IMHO) if the axioms have the right amount of “play” in them … such that the resulting inferences are not excessively rigid, but neither are they inchoate … and I think the students have struck this balance pretty nicely.

  53. Bram Cohen Says:

    The page load times are a little on the slow side.

    The combination of slider and submit button is extraordinarily clunky from a UI standpoint. It should be possible to simply click on some kind of range (like on hotornot, for example).

    I did it until I hit a tension between ‘children should have the right to vote’ and ‘everyone should have the right to vote’. I slightly disagreed with children, and slightly agreed with everyone, specifically because they’re in conflict with each other. If I fully agreed with both obviously that would be a conflict, but I did slightly because the UI doesn’t allow the listing of specific caveats.

  54. Sim Says:

    RE: 4) all ABC logical chains I saw suck.

    It sucks for “Strong AI” but looks fine for “Axiom of Choice”. As many suggest, the problem is with what Warren refers to as “definitional assumptions”.

    One improvement may be the following: when a tension is found, ask for the change in assumptions that would resolve the conflict, and then add it as a separate question. I bet Plato will approve this kind of wiki.

  55. Bram Cohen Says:

    Okay, I messed with it some more and with some slight adjustments for the extreme literalness of the questions, I got the libertarianism one totally consistent.

    For the axiom of choice one the statement ‘Given any collection C of non-empty sets, there exists a set X such that for each S∈C, X and S share exactly one element.’ is unambiguously false, even in the finite case. Likewise, I think the statement ‘Every set S has an ordering such that each non-empty subset of S has a minimal element under that ordering.’ is false with or without the axiom of choice – the set 1/2^k for all integers k has no minimum element.

    There’s a typo in ‘For any two sets A and B, there is either an injective map A→B or an injective math B→A.’ It should say ‘map’ instead of ‘math’.

    I was surprised how little conflict I had on the axiom of choice one (no conflicts except for the two above, and those I suspect are misstated). Actually I’m very curious as to why my answers didn’t conflict, I thought some of them would.

    The big missing features (aside from the things I complained about before) are user-contributed questionnaires, and the ability for it to remember your answers and compare them to other people’s.

    Also, it’s strange that there’s no questionnaire for the logical conflicts of the religious person who you had the discussion with on the plane, which was presumably the inspiration for making the whole site in the first place.

  56. Joe Shipman Says:

    Good idea, very buggy data. The section on the Axiom of Choice ought to be the best one, but even there, there is a bug.

    I believe extremely strongly that people have a moral obligation to maintain consistency in their beliefs, and further to resolve tensions between logic and emotion (not always in one direction — logic can indicate inappropriate emotional responses, but emotions can also alert one to mistaken premises or subtle reasoning errors). So this is potentially a killer app, but it needs much more work along the lines indicated by all the commenters here and on the site itself.

    The slider would be good if there were correct formulas to resolve conflicts between partial beliefs — if one gives 80% agreement to A and 90% agreement to B, then one’s agreement with any C that is logically implied by (A&B) had better be at least 70%.
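    The formula Joe has in mind is the Fréchet inequality; a sketch of the n-ary version (function name mine):

```python
def conjunction_lower_bound(agreements):
    """Frechet lower bound on agreement with a conjunction A1 & ... & An.

    Anything logically implied by the conjunction deserves agreement of
    at least max(0, sum of the agreements - (n - 1)); Joe's 80%/90%/70%
    rule is the n = 2 case: 0.8 + 0.9 - 1 = 0.7.
    """
    return max(0.0, sum(agreements) - (len(agreements) - 1))
```

    Note the bound degrades quickly: two 50% beliefs constrain their conjunction’s consequences not at all.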

  57. Stephen Harris Says:

    There were several comments in the Strong AI section that complained about the definition of “conscious”. The primary definition of conscious is “awake”. The survey used the second definition of “conscious”, which was something like “the potential to exhibit conscious activity”, that is, could become awake and not be doomed to remain in a coma. What is conscious activity? I don’t think you can call it conscious behavior, because there are questions about disembodied brains and whether those have consciousness. Even the term “consciousness” has the coming-to-awareness connotation, as in “he regained consciousness”. I think maybe the idea of self-awareness is closer to the concept that the survey wants to represent as conscious.

  58. Sim Says:

    In the QM section, I’m facing funny explanations such as “either QM demonstrates the reality of parallel universes or consciousness must play some fundamental role in the laws of physics.” or “future clarification of QM implicates QM demonstrates the reality of parallel universes.”. The last one is delicious 🙂

    I think the problem comes from the following: if the program begins with
    A = “Ted is a dog”
    B = “Ted is a cat”
    A implies not B
    B implies not A
    it will then reach the conclusion “one of A or B”. No: the conclusion should be “one of A or B or neither” because, well, Ted might be neither a dog nor a cat.
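    Sim’s point checks out by enumerating the four truth assignments (proposition names taken from the comment; the helper is illustrative):

```python
from itertools import product

def entails(axioms, goal):
    """Brute-force entailment over two propositions a ('dog') and b ('cat')."""
    for a, b in product([False, True], repeat=2):
        if all(ax(a, b) for ax in axioms) and not goal(a, b):
            return False  # countermodel: axioms hold but goal fails
    return True

implies = lambda p, q: (not p) or q
axioms = [lambda a, b: implies(a, not b),  # "Ted is a dog" implies not a cat
          lambda a, b: implies(b, not a)]  # "Ted is a cat" implies not a dog
```

    The countermodel a = b = False (Ted is neither) shows that “one of A or B” does not follow from the two implications, while mutual exclusion does.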

  59. wolfgang Says:

    Scott,

    I think your manager is great, I really enjoyed it.
    Where Wittgenstein ran into a brick wall you may yet succeed.
    I think your manager is the first example of Web 3.0 a.k.a the consistent interwebs.
    Bloggers everywhere will check the consistency of their worldviews and rationally reduce the tensions they find…

  60. John Sidles Says:

    Sim is 100% right IMHO … this is perhaps a (fixable) bug in the logic engine … the result might be an engine that, instead of deducing “There is tension in embracing these particular beliefs”, would assert “Perhaps you should look for alternative beliefs that triangulate this particular tension”.

    A real-world tension-relaxation example having considerable practical importance is the tension between (on the one hand): “Simulation of quantum systems by classical computers is possible, but in general only very inefficiently” (Nielsen and Chuang, section 4.7.1), and (on the other hand) the tension-creating yet well-established empirical fact that “The simulation of broad classes of quantum systems by classical computers is reasonably efficient and remarkably accurate”.

    This tension can be resolved by a triangulating belief “The state-spaces of noisy and/or low-temperature quantum systems are the dynamical concentration manifolds of (non-symplectic) Lindblad-Ito vector flows that are efficiently simulatable with polynomial classical resources.”

    The resulting ontology … in which quantum computers are provably infeasible to simulate …. yet broad classes of real-world quantum systems are feasible to simulate … and the boundary between these domains has an exceedingly rich mathematical structure … well, isn’t that a very congenial world for mathematicians, biologists, chemists, and physicists to live in? … and it is a paradise for quantum systems engineers and entrepreneurs! 🙂

  61. Gil Kalai Says:

    Very nice project, Scott
    Congratulations!
    Gil

  62. John Sidles Says:

    Wolfgang foresees: Bloggers everywhere will check the consistency of their worldviews and rationally reduce the tensions they find…

    Wolfgang articulates a dream that is 324 years old …

    “The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: ‘Let us calculate [calculemus], without further ado, to see who is right’.” Gottfried Leibniz, The Art of Discovery, 1685

    It would be a mistake, however, to regard the above passage as any kind of summary of Leibniz’ thinking … see Israel’s Enlightenment Contested, chapter 8, Newtonianism and Anti-Newtonianism in the Early Enlightenment for an in-context account … in essence, in the above passage Leibniz and his (radical) colleagues are seeking to hold open to debate moral and religious propositions that the (moderate) Newtonian school strongly desired to hold as inarguable axioms.

  63. Aaron Denney Says:

    It appears that the quantum mechanics topic has been taken down. Unfortunately, this has left dangling links in my deferred section at http://projects.csail.mit.edu/worldview/tensions/deferred I suppose they’ll eventually go away.

  64. pete Says:

    Hi John (#62)

    The dream of a philosophical calculus might be even older than that:

    http://en.wikipedia.org/wiki/Ramon_Llull#Ars_generalis_ultima_.28Ars_Magna.29

    What’s the betting it doesn’t stop there … ? 🙂

  65. AF Says:

    A more systems-related question: what is the backend technology that the site is implemented on?

  66. John Sidles Says:

    Pete says: The dream of a philosophical calculus might be even older than that …

    Thank you for that wonderful link to the life and works of Ramon Llull (of which I was completely unaware). For Llull to think these thoughts in the thirteenth century is amazing … and encouraging.

    Perhaps when E. O. Wilson wrote “The Enlightenment launched the modern era for the whole world; we are all its legatees. Then it failed. Astonishingly—it failed.” … Wilson was being too impatient! 🙂
    —–

    Aaron Denney Says: It appears that the quantum mechanics topic has been taken down…

    For me, it’s back up … but it still asserts some problematic axioms … like this one:

    —–
    “One of C or D”, where

    C: “Quantum mechanics is fundamentally a theory about information and knowledge, not ontology (i.e., what “really” exists).”

    D: “Mixed states, as the most general representation of an agent’s knowledge of a quantum system, are more fundamental than pure states.”
    ———–
    Is there any easy way to see why “C XOR D” is so obvious (or so cool in its implications!) that it should be embraced as an axiom?

    Or is this too picky … maybe this project is all about constructing consistent narratives?

    If so, a possibly valuable feature, at the end of the assessment, would be to identify the opposite of logical tensions … namely axiomatic Maguffins … philosophical axioms that sound cool, but are logically independent of one’s belief system! 🙂

  67. Andre Chalom Says:

    Some constructive feedback: I started taking the Strong AI questions. After what seemed like a lot of questions, I started to get really bored with the fact that the questions were “setting me up” to point out the inconsistencies. Not a nice mood to be in if you expect the answers to be true. Also, I would like to know in advance how many questions are left unanswered.

  68. rrtucci Says:

    Walmart Memo

    For our market research on Worldview Manager, we asked 10 typical American males (a.k.a. Joe Six-Packs) to try the product on a laptop. 6 of them threw the laptop across the room after 6 questions, 3 threw it after 9 questions, and 1 after 10. They were all furious with a Mr. Aaronson smartass for contradicting them. On the basis of this research, we do not recommend that this product be sold at Walmart, unless it is given away for free with a sticker that proclaims it to be a gift from the labor unions.

  69. Raoul Ohio Says:

    ** Two sets of 13 things that do not make sense. **

    Related to the question of what you believe, New Scientist, in 2005 and again today, has published a discussion of 13 things that do not make sense:

    March 2005:
    http://www.newscientist.com/article/mg18524911.600-13-things-that-do-not-make-sense.html

    Sept 2009:

    http://www.newscientist.com/special/13-more-things

    A few are kind of lame, but most are fundamental dilemmas in physics and cosmology. I was particularly gratified to find that my suspicion that inflation theory is ad hoc wishful thinking is widely shared.

  70. Patrick C Says:

    I think there is a problem with the concept behind Worldview-Manager.

    It seems that rather than considering the potential for conflict in their worldviews when informed of conceptual tension, users are instead assuming that the Worldview Manager is incorrect (or buggy), believing their views to be correct and the Worldview Manager to be in tension.

    I’m thinking that this is not the result you were looking for.

  71. Lewis Powell Says:

    Having only looked at the libertarianism section (and having survived it without any significant tensions!), I am worried that perhaps you are overlooking some of the complexities in the logic of deontic modals (‘should’ and ‘ought’). I don’t know how you are calculating tensions, but just as a brief example of how things can go haywire with deontic modals, here is one oddity in deontic logic: apparent failures of detachment.
    Detachment is the inference from [if P, then Q] and [P] to [Q]. This seems to not always work for conditionals including deontic modals. For instance:
    If Tom wants to get his inheritance early, he ought to murder his rich uncle.
    Tom does want to get his inheritance early.
    But it is not true that Tom ought to murder his rich uncle.

    The SEP article on Deontic logic offers a decent introduction to some of the issues that come up:
    http://plato.stanford.edu/entries/logic-deontic/
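
    One standard way to see the failure is to give “ought” a possible-worlds semantics, where O(φ) means φ holds at every deontically ideal world. Then the wide-scope reading O(P → Q) can be true while P is actually true, yet O(Q) still fails — so naive detachment is blocked. A toy Python model (my own illustration; the world assignments are made up for the example):

```python
# Toy deontic model: O(phi) means phi holds at every ideal world.
# Worlds are (P, Q) pairs, where P = "Tom wants his inheritance early"
# and Q = "Tom murders his rich uncle".
ideal_worlds = [(False, False)]   # ideally, Tom does neither
actual_world = (True, False)      # in fact, Tom does want the inheritance

def O(phi):
    """phi is obligatory iff it holds at every deontically ideal world."""
    return all(phi(w) for w in ideal_worlds)

wide_scope = O(lambda w: (not w[0]) or w[1])  # O(P -> Q): True
p_actual = actual_world[0]                    # P at the actual world: True
ought_q = O(lambda w: w[1])                   # O(Q): False

print(wide_scope, p_actual, ought_q)  # True True False: detachment fails
```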

  72. g Says:

    Patrick C: Ho ho. But while WVM (by which I mean the software together with its topic files) is new and visibly buggy, it’s not really unreasonable for users to react that way.

  73. Stephen Harris Says:

    Perhaps this applies to other areas, but I was thinking about the Strong AI worldview and the Turing Test. You want a Turing Test Passing Program (TTPP) to be able to answer some sophisticated questions in order to establish that the program demonstrates an apparently intelligent level of understanding — deemed intelligent if a human were to provide the same answers. I wondered how one would distinguish the human responses from a good TTPP candidate. The human responses were varied and contradictory, while often showing internal consistency. How do you tell the difference between human inconsistency and a poor level of understanding on the part of the potential TTPP? Force the TTPP to make mistakes so that it appears more like the human responses? Well, several of the human responses were that their responses were consistent — it was the Worldview testing program which was wrong when it judged certain of their responses as inconsistent. There was the matter of no consensus about definitions of words like “conscious”. Here Louis Wasserman and Leonid Grinberg acted in the role of a panel of judges in the more traditional email version of the Turing Test (TT). How does one prove that their judgments about what is consistent are correct? I suppose one could solicit feedback from a bunch of real people taking the test.
    I’ve long felt the TT was a dubious tool for evaluating software, whether it was “conscious” or “intelligent”. Is it required to have consciousness in order to demonstrate intelligence, so that software which demonstrates intelligent-like behavior is not necessarily conscious (which is weak AI)? Is it OK to call sophisticated answering-machine software like AT&T uses intelligent, or should there be another word for it? But this test made me realize that, besides wondering what the results said about the program, it also called into question how difficult it is to even create a valid and sound set of questions to test the insolent puppy AI TTPP candidate. I think making up the test is just about as hard as creating a program which will pass it. It reminds me of IQ tests.
    At the upper end of IQ tests, the harder questions/answers tend to become random for average humans and only clump (discover a significant pattern) toward a specific right answer for those humans who are quite bright. It showcases subjectivity, and that notion seemed to me to underlie many of the human complaints, although the complainers more often felt that the shortcoming they mentioned was objective.

  74. John Sidles Says:

    I would greatly welcome … and I think lots of folks would welcome … a follow-up post from Louis Wasserman and Leonid Grinberg, summarizing what they have learned (so far) from this project!

  75. orthonormal Says:

    The Axiom of Choice section is in dire need of a mathematically literate editor. The claim that Lebesgue measure exists is interpreted as the claim that all subsets of R^3 are measurable, despite the fact that said measure is defined only for a particular sigma-algebra of measurable sets. Worse, the direct “Axiom of Choice” statement is misstated and trivially false.

  76. Pat Cahalan Says:

    > We could imagine a world in which English was
    > codified as axiom-based, logically dynamical
    > language

    I’m trying to make a joke about Esperanto, but I can’t make it funny.

  77. Jim Graber Says:

    Sorry to be late to the party. I just tried the worldview manager. I really like the idea.

    We have needed something like this for a very long time.
    I hope it succeeds, expands and prospers.

    As John Sidles points out, this longing goes back through Godel and Hilbert all the way back to Leibniz.

    Based on my trial, I have three suggestions:

    1. All the terms need definitions.

    2. The axioms of the systems should be exposed.

    3. It needs an explicit connection to an interactive theorem prover (as elementary as possible).

    Definitions because these fields are rife with disagreements about terminology.
    Furthermore some concepts are quite subtle and require fine distinctions.
    Lastly, some of us are unfamiliar with some of the more advanced concepts.

    Axioms as well as definitions to result in communication as complete and unambiguous as possible.

    Theorem prover to allow users to develop further or alternative inputs, perform consistency checks, and better understand the conclusions of the manager.

    I am unable to “resolve my tensions” or determine if/whether/how much I agree with the worldview manager, primarily because I regard so many of the terms as ambiguous and undefined.
    When I have time, I will play around with it more and see if I can tease out what the hidden assumptions and definitions are.
    Thanks to all for putting this up.
    Jim Graber