= log(n^k) + O(1) + n^k,

though I’m not sure how that would be realized in practice on a Turing machine.

In it you state that PSPACE is contained within EXP, because “A machine with n^k bits of memory can only go through 2^(n^k) different configurations, before it either halts or else gets stuck in an infinite loop.”

Does this take into account (assuming the standard multi-tape Turing machine with a work tape, read-only input, and write-only output) the fact that the head could be in any of n^k positions, with m possible internal states, for a total of n^k · m · 2^(n^k) possible configurations? Or are you just assuming the two counts are asymptotically close?
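To see why the distinction doesn’t matter for the containment in EXP, here is a small Python sketch of the two counts (the names `naive_count` and `full_count` and the sample values of n, k, m are mine, purely for illustration). The extra n^k · m factor is only polynomial, so both counts are still 2^(O(n^k)):

```python
def naive_count(n, k):
    """Tape contents only: 2^(n^k) configurations."""
    return 2 ** (n ** k)

def full_count(n, k, m):
    """Tape contents x head position x internal state:
    n^k * m * 2^(n^k) configurations."""
    return (n ** k) * m * naive_count(n, k)

# The multiplicative overhead n^k * m is swallowed by the exponent:
# n^k * m * 2^(n^k) = 2^(n^k + log(n^k) + log(m)) = 2^(O(n^k)).
n, k, m = 2, 3, 5
print(naive_count(n, k))      # 2^8 = 256
print(full_count(n, k, m))    # 8 * 5 * 256 = 10240
```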

However, from a pure logic setting, you can abstract away the computational infeasibility aspect entirely. For instance, I can just assume the existence of a perfectly secure Enc(k,m). I don’t have to worry about implementing its security using a computationally hard problem. I only care about how I can preserve security through its usage. This is what happens in BAN logic and CPL.
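The one-time pad is the standard concrete instance of such a perfectly secure Enc(k,m), assuming the key is uniformly random, as long as the message, and never reused. A minimal Python sketch (function names are mine):

```python
import secrets

def keygen(length: int) -> bytes:
    """A uniformly random key, as long as the message."""
    return secrets.token_bytes(length)

def enc(k: bytes, m: bytes) -> bytes:
    """One-time pad: ciphertext = key XOR message (information-theoretic secrecy)."""
    assert len(k) == len(m), "key must match message length and never be reused"
    return bytes(a ^ b for a, b in zip(k, m))

dec = enc  # XOR is its own inverse

m = b"attack at dawn"
k = keygen(len(m))
assert dec(k, enc(k, m)) == m
```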

@58: For me it is just a theoretical or philosophical inquiry. Even given the silly premises of these sci-fi scenarios, can ideas from cryptography and information security help us? I think the answer is an obvious yes.

Recently, I researched various logics for cryptographic protocols. I noticed that they indirectly assume a lot about the nature of information and randomness. Typical protocol models admit a close connection between randomness and time. Algorithmic randomness is typically understood using sequential forms of computation. Randomness defined via unpredictability requires that past “events” not yield information about a future event (as with a random number generator). Steps for executing protocols are performed sequentially (sometimes in parallel). However, all of these assume that there is a linear order on states/events. I want to know if we can generalize or spice up these concepts to handle more elaborate situations.
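That “past events should not predict future events” requirement can be phrased as a toy (purely statistical, not cryptographic) next-bit test; the function names here are my own, not from any standard library:

```python
def next_bit_advantage(bits, predictor):
    """Fraction of positions where predictor(prefix) guesses bits[i],
    minus 1/2.  For a truly random sequence, any fixed predictor's
    advantage tends to 0; for highly structured sequences it is large."""
    hits = sum(predictor(bits[:i]) == bits[i] for i in range(1, len(bits)))
    return hits / (len(bits) - 1) - 0.5

# A predictor that just repeats the last bit seen:
repeat_last = lambda prefix: prefix[-1]

print(next_bit_advantage([1] * 100, repeat_last))   # 0.5: fully predictable
print(next_bit_advantage([0, 1] * 50, repeat_last)) # -0.5: perfectly anti-correlated
```

A sequence with a linear order on events is baked into this definition via the prefix `bits[:i]`; that is exactly the assumption the comment is questioning.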

For further clarification, I’m only considering the basic logic surrounding the problem. I am ignoring the laws of modern physics for the moment. So instead of thinking about the possibilities permissible by quantum mechanics, I am thinking about what is possible under variations of LTL (linear temporal logic). By “nonlinear time”, I am referring to temporal logics that assume a non-sequential structure to time. I am not suggesting that such a thing is sensible in the real world. It is just a thought experiment.
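For concreteness, the linear case is easy to write down: a toy evaluator for two LTL operators over a finite trace (a crude finite-trace approximation, with names of my own choosing). A “nonlinear” variant would replace the list of states with a tree or partial order:

```python
def eventually(trace, prop):
    """F prop: prop holds at some state of the (finite, linearly ordered) trace."""
    return any(prop(s) for s in trace)

def globally(trace, prop):
    """G prop: prop holds at every state of the trace."""
    return all(prop(s) for s in trace)

# States as dicts; a totally ordered run of a hypothetical protocol:
trace = [{"key_fresh": True}, {"key_fresh": True}, {"key_fresh": False}]
assert eventually(trace, lambda s: not s["key_fresh"])
assert not globally(trace, lambda s: s["key_fresh"])
```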

Hi Mike, just out of curiosity, what is (are) the motivation(s) for your question? Are you a sci-fi writer looking for plausible story lines, or is the question motivated by pure scientific inquiry? If the latter, there are several problems with some of the concepts you are alluding to, like “time travellers”; we don’t even know if such a phenomenon is possible (is it even “allowed” by the laws of quantum mechanics as we know them?). What do you mean by “nonlinear time”? Can you relate this notion to some known aspect of general relativity?

The flip side, of course, is that in cryptography you have the freedom to design your PRG however you like—the more structureless, the better—whereas in math, you’re trying to prove pseudorandom behavior in particular, “structured” objects (such as the set of prime numbers).

Having said that, for many applications in theoretical computer science (e.g., Valiant-Vazirani, approximate counting, error-correcting codes, extractors and condensers), we too only need the weaker kinds of pseudorandomness—the kinds that can be shown to exist unconditionally using present tools! And largely because of that, there’s been an extremely rich interaction over the last decade between CS theorists and mathematicians about pseudorandomness (especially regarding additive combinatorics, and its applications to randomness extractors). Avi Wigderson, Terry Tao, and Timothy Gowers have all written eloquently about this story, and I’m not going to do it justice in a blog comment.
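As one example of those weaker, unconditionally existing objects: Valiant–Vazirani needs only a pairwise-independent hash family, and a standard construction over GF(2) maps x to Ax + b for a uniformly random matrix A and vector b. A Python sketch (variable names are mine):

```python
import random

def random_affine_hash(n_in, n_out, rng=random):
    """Sample h(x) = A·x + b over GF(2); the family of all such h
    is pairwise independent."""
    A = [[rng.randrange(2) for _ in range(n_in)] for _ in range(n_out)]
    b = [rng.randrange(2) for _ in range(n_out)]
    def h(x):  # x: list of n_in bits
        return [(sum(a * xi for a, xi in zip(row, x)) + bj) % 2
                for row, bj in zip(A, b)]
    return h

h = random_affine_hash(6, 3)
print(h([1, 0, 1, 1, 0, 0]))  # a list of 3 bits
```

No hardness assumption is needed here: the pairwise-independence guarantee is a counting fact about random affine maps, not a conjecture about any adversary’s running time.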
