I did not come here to discredit Wegner. I loved the book he wrote in the early 1970s, with its sections on SECD and the lambda calculus. Perhaps all Wegner is saying is that it is good to look at concurrent programming – if so, many would heartily agree with him – presumably this is what Dijkstra and Wirth were saying in 1970, writing on cooperating sequential processes, and also what Ken Thompson was saying in 1970, in inventing Unix.

At present it seems a lot more important for me to read the Davis papers than to puzzle over papers by Wegner.

Perhaps pedagogically there is something wrong with much of the presentation of Turing machine material. If Wegner has difficulties with it, how much more so will the general reader. So Wegner’s rants may ultimately have a beneficial effect in encouraging people to restate results in a better way.

“Timing Toast”, by Piet Hein:

There’s an art of knowing when.
Never try to guess.
Toast until it smokes and then
twenty seconds less.

This sounds really, really doubtful to me. I knew Iverson reasonably well and worked with him at the IBM Philadelphia Scientific Center. He knew an awful lot of mathematics and computing, and I’m absolutely sure he knew about Turing.

Iverson was concerned with notation and expressibility. The claim above has all the hallmarks of a misunderstanding that has been repeated multiple times and changed in the telling.

Certain programs have the property that they manipulate their input data in a “type-safe” way. That is, if the encoding of the input data is changed, then the program will continue to run correctly as long as the data is accessed through the proper channels. This is an extremely important property in practice, and one of the major focuses of high-level language design is ensuring type safety. In fact, I would say one of the most important things that a program expresses, perhaps more important even than its semantics, is the interaction of its data structures.

In a Turing machine, all the data accesses are “flattened out” to pure bit manipulation. In this way, a Turing machine can never be type safe, because it lacks the formalism to specify which data accesses are properly controlled.
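To see concretely what “flattened out” means, here is a minimal sketch (my own illustration; the names `step` and `delta` are invented): every operation a Turing machine performs is an untyped read or write of a single tape cell, and nothing in the formalism can mark certain accesses as going through an interface.

```python
# Hypothetical one-step Turing machine transition. Every data access is a
# raw, untyped read or write of a single tape cell -- there is no way to
# declare that some accesses go through a "proper channel".

def step(tape, head, state, delta):
    """Apply one transition. `delta` maps (state, symbol) to
    (new_state, new_symbol, move), where move is -1 or +1."""
    symbol = tape.get(head, 0)                        # raw read of one cell
    state, tape[head], move = delta[(state, symbol)]  # raw write of one cell
    return tape, head + move, state

# A trivial machine that flips the bit under the head and halts.
delta = {("start", 0): ("halt", 1, +1),
         ("start", 1): ("halt", 0, +1)}

tape, head, state = step({0: 0}, 0, "start", delta)
print(tape[0], state)  # prints: 1 halt
```

Whatever structure the input data once had, by this level it has been erased: the machine sees only symbols in cells.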

This is an example of an important aspect of computation that can be expressed only in high-level languages, not in Turing machines.
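As a concrete (invented) illustration of “access through the proper channels”: below, a client that reads a point only through its interface keeps working when the internal encoding changes, while a client that reached into the representation directly would break.

```python
# Two encodings of a point; client code that uses only the interface
# (the x() method) keeps working when the representation changes.

class PointCartesian:
    def __init__(self, x, y):
        self._coords = (x, y)          # encoding 1: a tuple
    def x(self):
        return self._coords[0]

class PointRecord:
    def __init__(self, x, y):
        self._x, self._y = x, y        # encoding 2: separate fields
    def x(self):
        return self._x

def through_the_interface(p):
    return p.x()                       # the "proper channel"

print(through_the_interface(PointCartesian(3, 4)))  # prints: 3
print(through_the_interface(PointRecord(3, 4)))     # prints: 3
# A client that read p._coords directly would break on PointRecord.
```

This is the property a Turing machine cannot even state, let alone guarantee.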

What do you people think about nondiscrete models of computation as possible counterexamples to the Church-Turing thesis?

http://edelstein.huji.ac.il/staff/shagrir/papers/physical_computation.pdf

(section 3)

I remember other papers but cannot find the references.

I think the Church-Turing thesis for those models is that continuous computation, modulo finite-precision measurement and setup of the computing apparatus, is equivalent to classical computation.

And do you think that continuous models of computation with beyond-Turing capability (before imposing feasibility assumptions) are useful?

Any reference?

“Common-sense must be used here to avoid embarking on an over-axiomatized, and hence misguided, piece of theoretical physics. We […] should not be trapped into axiomatizing theoretical ideas out of existence.”

— Christopher Isham (1984)

(quoted in Matthias Blau’s popular on-line lecture notes *“Symplectic geometry and geometric quantization”*, 1992)

Isham’s remark descends from

“If one finds a difficulty in a calculation which is otherwise quite convincing, one should not push the difficulty away; one should rather try to make it the centre of the whole thing.”

— Werner Heisenberg (1971)

(which a Google search finds quoted in many places)

Here the point is that a great deal of heat commonly is generated — especially in the blogosphere — when postulates are mistaken for and/or claimed to be deductions. And the rhetorical motivation for such claims is evident: the correctness of deductions can be verified by more-or-less irrefutable reasoning … the correctness of *postulates* is susceptible to no such checks!

Textbooks on complexity theory (example: Arora and Barak, *Computational Complexity: A Modern Approach*) and books on quantum theory (example: Nielsen and Chuang, *Quantum Computation and Quantum Information*) generically begin with a statement of postulates. Three questions then are natural.

**Q1** What is the likelihood that the *reasoning* of standard textbooks is globally and seriously flawed? Both common-sense and history indicate that this likelihood is vanishingly small.

**Q2** What is the likelihood that, in coming decades, the *postulates* of 20th century textbooks will be appreciated as mathematically limiting and/or physically unrealistic? Both common-sense and history indicate that this likelihood is sufficiently great as to amount to a near-certainty.

**Q3** What new and/or amended postulates, regarding complexity theory and/or quantum dynamics, will the great textbooks of the 21st century embrace? We are very unlikely to collectively appreciate the answer to this question … until someone writes these textbooks. 🙂

It is a pretty safe bet — if we are guided by Isham and Heisenberg — that the amended complexity/quantum theoretic postulates of the 21st century will originate in physical concepts rather than reasoning from axioms (Isham’s principle), and that these postulates will embrace, as central, difficulties that at present are regarded as peripheral (Heisenberg’s principle).

**Conclusion** Should we be skeptical of deductions that follow from the CT thesis? Reasonably, no. Should we be skeptical that the CT thesis provides uniquely optimal foundations for complexity theory? Reasonably, yes.

I recall my experience of learning formal logic. Prior to my university education, I would never have dreamed or imagined that natural language could be given a mathematical semantics. Learning formal logic was one of the most tedious experiences of my sophomore year of university, and to date, I think the presentation in a lot of logic texts does not help this state of affairs. Still, it was completely amazing that the difference between “some child likes all chocolates” and “all chocolates are liked by some child” can be turned into a mathematical theorem. Even more amazing was the realisation that it was, in principle, possible to codify the language of mathematical theorems and proofs entirely in logical terms. This universality led to the mind-blowing results of Goedel. Mind-blowing for me, not because of their content, but because it was possible at all to make statements that were about an entire area of mathematics, and not just about specific objects.
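In first-order notation (my rendering, not the commenter’s), the two sentences differ only in the order of the quantifiers:

```latex
% "Some child likes all chocolates": one fixed child c works for every chocolate x.
\exists c\,\bigl(\mathrm{Child}(c) \land \forall x\,(\mathrm{Chocolate}(x) \to \mathrm{Likes}(c,x))\bigr)

% "All chocolates are liked by some child": the child may vary with the chocolate.
\forall x\,\bigl(\mathrm{Chocolate}(x) \to \exists c\,(\mathrm{Child}(c) \land \mathrm{Likes}(c,x))\bigr)
```

The first implies the second, but not conversely; that asymmetry is the kind of natural-language distinction that becomes a mathematical theorem.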

That was cool, but it is infeasible and feels rather inhuman to work purely with logical symbols (though I have observed people doing that, including a few programming language researchers). It is more comfortable for most working mathematicians and computer scientists to work with natural language, while being aware of the formalist underpinnings.

Moreover, once I understood better, the results of Goedel didn’t seem as universal. Gentzen, for example, did prove the consistency of arithmetic (using methods going beyond arithmetic itself).

The second example is about the independence of the continuum hypothesis from the ZFC axioms. I was fascinated to learn that there were models of set theory that could be constructed using forcing. This in turn led to the realisation that the ZF and ZFC axioms for set theory are axioms like any other, and the universe of sets we work in, in day-to-day mathematics, is a model of a set of axioms, like any other model. There are many axiom systems possible and many models.

Here’s the point of the reminiscence. Learning formal logic was a bit tedious, but it felt like a rite of passage to a world where I could continue to do mathematics but with a far deeper understanding of the internal structure of that mathematics. The same applies to set theory. At some point I realised that the kind of sets and operations on sets that we use are just specific models, and there are many more possible and alternative interpretations of the membership predicate, for example. Once I knew better, the results I found mind-blowing initially remained fascinating, but definitely seemed less universal.

In a discussion of these areas and results, it is not clear to me how much this matters. Turing machines and the Church-Turing thesis feel similar to me. Learning to reduce between computational models and about the Church-Turing thesis feels like a rite of passage and one that reveals something deep and fascinating about computation. But the Church-Turing thesis also has a context, which from a specific perspective is not as universal as it may have first seemed.

And here’s where I feel at a bit of a loss. In the case of certain logical theorems, or independence results in set theory, or the Church-Turing thesis, there is a context in which those results are true and for most computer scientists that context is always in the background, but for some it is not. Does anyone see what I’m trying to say?
