In my interpretation, the paradox then gets all its mileage by reasoning about what one Wigner’s friend is certain that another Wigner’s friend is certain of, etc., but with these certainties applying to a world where the Wigner’s friends are not measured in strange bases (e.g., |Thought_{1}⟩+|Thought_{2}⟩, |Thought_{1}⟩-|Thought_{2}⟩)—even though, in order to produce the paradox, the Wigner’s friends *do* have to be measured in strange bases. But I don’t regard that as legitimate (“unperformed measurements have no results”)! So for me, anyway, the argument falls apart, *not* because of any of the assumptions that Frauchiger and Renner explicitly enumerate, but because of a background assumption that they barely even mention.

What about functional programming, Scott, where programming is treated as ‘the evaluation of mathematical functions’? That seems like a pretty practical application of computability.
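To make the slogan concrete, here’s a minimal sketch (my own illustration, not drawn from any particular text) of what ‘programming as the evaluation of mathematical functions’ looks like: programs are pure functions, and running a program is nothing more than function application and composition.

```python
# Pure functions: the output depends only on the input, with no side effects.
def square(x):
    return x * x

def compose(f, g):
    """Mathematical function composition: (f . g)(x) = f(g(x))."""
    return lambda x: f(g(x))

# "Running the program" is just evaluating a function at an argument.
square_then_double = compose(lambda y: 2 * y, square)
print(square_then_double(3))  # evaluates 2 * (3 * 3) = 18
```

Because the functions are pure, evaluation order doesn’t affect the result — which is exactly the sense in which this style of programming is an application of computability theory.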

I’ve come up with a new definition of cognition. And I have a way to relate this definition to mathematics and computer science. Here’s my definition:

‘Cognition is the integration of two different modes of modeling, where abstract (high-level) and concrete (low-level) models are combined in order to generate procedural knowledge for achieving goals’

So I think at the fundamental root of a mind in the most general sense (cognition), there are three different modes of thought – abstract models (let us call this ‘Far’ mode), concrete models (let us call this ‘Near’ mode), and practical ‘how-to’ algorithms (let us call this ‘Procedural’ mode).

Near, Far and Procedural.

And then my ‘Interpretation’ for connecting (mapping) math to cog-sci is that any given domain of mathematics can be decomposed into these 3 categories, by ‘re-interpreting’ the given domain as ‘modes of cognition’. These modes of cognition are ‘levels of modeling abstraction’.

Now let’s look at computability based on the ‘Geddes Interpretation’ 😀

As I suggested in a previous post, there seems to be a ladder of abstraction, where I can indeed split the field of computational logic into 3 levels. This is speculative (because frankly, I really doubt that *anyone* really groks what all this type theory/categories/verification business is about yet), but the ordering below seems intuitively very plausible:

Near Mode – Modal Logic

Procedural Mode – Functional Programming (and Type Theory?)

Far Mode – Formal Language Theory

So the idea is that the practical ‘how to’ stuff of functional programming (procedural mode) is somehow generated by integrating or *balancing* two dual theoretical models (near and far modes), which I’m guessing to be modal logic + formal language theory.

A neat way of looking at this is that there’s an ultimate aesthetic meta-principle of the ‘Golden Mean’, where all cognition is about trying to balance two opposites (a generalization of the mathematical notion of ‘duality’ perhaps?).

Then I can think of the 3 sub-fields above as a ‘see-saw’ with ‘Formal Language Theory’ at one end, and the opposite or dual (‘Modal Logic’) at the other end. And then ‘Functional Programming’ can be interpreted as the balancing of the two ends (‘the golden mean’).

See the recent Scott Alexander story ‘In The Balance’, which is actually a good metaphor for what I’m proposing:

http://slatestarcodex.com/2018/09/12/in-the-balance/

So I’m making the big bold claim that all cognition is Near modes, Far modes and the ‘golden mean’ (balance) between them (Procedural Modes)!

If I’m right, then functional programming (and type theory) are somehow reducible to modal logic, as well as formal grammars.

A bit of quick research using Google does indeed unearth some previous attempts to create ‘Modal Type Theories’.
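One small, concrete illustration of the formal-grammar side of this claim (my own sketch, not taken from the modal-type-theory literature): even a toy functional language makes the split visible. Its *syntax* is specified by a formal grammar, while its *meaning* is given by an evaluator that works by structural recursion — pure function evaluation.

```python
# A toy expression language.
# Syntax, as an (informal) formal grammar:
#   expr ::= number | ("add", expr, expr) | ("mul", expr, expr)

def evaluate(expr):
    """Evaluate an expression tree by structural recursion over the grammar."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    if op == "add":
        return evaluate(left) + evaluate(right)
    if op == "mul":
        return evaluate(left) * evaluate(right)
    raise ValueError(f"unknown operator: {op}")

# (2 + 3) * 4
print(evaluate(("mul", ("add", 2, 3), 4)))  # 20
```

The grammar fixes which expression trees are well-formed; the evaluator assigns each well-formed tree a value. That division of labor — formal language theory for syntax, functional evaluation for semantics — is at least suggestive of the kind of decomposition being proposed here.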
