“Does anyone have a favorite write-up of the Friedberg-Muchnik proof for building these degrees and/or further developments in computability?”

The book Gems of Theoretical Computer Science has a proof of the Friedberg-Muchnik Theorem that I was eventually able to understand. Also, Richard Shore has a nice survey of more recent work. If you want to go deeper than that, you’re on your own, dude. 🙂

“Should complexity theorists be versed in advanced computability work?”

Sure, I guess! Ideally, they should also be versed in gardening, Lithuanian, and Mayan archaeology. 🙂

(Seriously: *Any* area of math can give you useful tools for complexity, and many of the basic notions of complexity, like diagonalization and oracles, come directly from computability. On the other hand, I suspect that most complexity theorists working today never learned computability beyond the undergrad level.)

It occurs to me that, if you buy the basic scheme I described, you can have the n-state machines of E2 simulate, instead of the nth E1 machine, some fixed machine M on input n. That way, if M accepts a language of intermediate Turing-degree, the last bit of BB for E2 will have the same degree.

Does anyone have a favorite write-up of the Friedberg-Muchnik proof for building these degrees and/or further developments in computability? Should complexity theorists be versed in advanced computability work?

As long as you’re writing research papers, I’d recommend getting some real webspace (not blogger.com) to put them on.

You’re at Berkeley now?

So, contrapositively, suppose a function f has no hard-core bits in the following sense: for any decision problem D(x) that is computable given the pair (x, f(x)), D(x) is computable given x alone.

Does it follow that f is computable?

See my page for the answer and generalizations.

PS Anyone know how to post a non-image file on blogger.com? I have an updated thesis I want to put up but don’t have home-accessible webspace. Thanks!

Fix a standard enumeration E1 of input-free Turing machines; what if we specify another enumeration E2 by simply having every “n-state” E2 machine simulate the nth E1 machine, outputting 1 if it ever halts? Then, for BB defined relative to E2, BB(n) ≡ 1 (mod 2) if the nth E1 machine halts, and BB(n) ≡ 0 (mod 2) otherwise. Thus one can use BB(n) mod 2 for E2 to solve the halting problem for E1.
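A minimal sketch of that reduction (all names are hypothetical, and the “machines” are toy stand-ins, since real halting behavior is undecidable):

```python
# Toy stand-in for the E1 enumeration: in reality, whether the nth
# input-free Turing machine halts is undecidable; we fake it here so
# the sketch runs.
def e1_halts(n):
    return n % 2 == 0  # hypothetical toy rule

# BB(n) relative to E2, per the construction above: every n-state E2
# machine simulates the nth E1 machine and outputs a single 1 iff it
# halts, so the parity of BB(n) records that machine's halting.
def bb_e2(n):
    base = 2 * n  # some even output length contributed by other n-state programs
    return base + (1 if e1_halts(n) else 0)

# The reduction: an oracle for BB(n) mod 2 (in the E2 enumeration)
# decides the halting problem for E1.
def halts_via_bb(n):
    return bb_e2(n) % 2 == 1

print(all(halts_via_bb(n) == e1_halts(n) for n in range(50)))  # True
```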

If E2 seems unsatisfactory as an enumeration, e.g. because:

i) BB(n) for E2 doesn’t have growth recursively related to BB(n) for other machine models;

ii) these E2 programs don’t output all strings;

iii) even if ii) is met, the descriptive complexity of strings relative to E2 may be far from their descriptive complexity in E1;

I can suggest some fixes. (For i, use dovetailed simulations of E1 machines to generate large E2 outputs, while keeping the parity of 1’s in the outputs chained to the nth machine’s halting behavior; for ii and iii, introduce a designated subset of ‘shorter-output’ E2 programs that generate all strings, with compression occurring, without affecting BB.)

What else is desired (other than that the enumeration be ‘non-contrived’)?

This is of course mixing and matching busy beaver concepts (maximum runtime vs. maximum number of 1s), but it does show something about the importance of encodings.

I do too. It’s much simpler conceptually (e.g. the proof that it outgrows any computable sequence is one or two sentences).
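For reference, here is a sketch of that one-or-two-sentence proof (stated for the maximum-running-time version of BB):

```latex
\textbf{Claim.} No computable $f$ satisfies $f(n) \geq BB(n)$ for all $n$.

\textbf{Proof sketch.} Suppose it did. Given any $n$-state machine $M$, run
$M$ for $f(n)$ steps: since $BB(n)$ is the longest running time of any
halting $n$-state machine, $M$ halts iff it has already halted by step
$f(n)$. So the halting problem would be decidable, a contradiction. \qed
```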

Of course, it’d be interesting to answer the odd/even question for either version. It’d *especially* be interesting if they had different answers…

Well, if you can solve the halting problem, you can compute the Busy Beaver sequence, so the two problems are *equivalent* (in terms of Turing degree).
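That direction can be sketched too (toy machines and a toy halting oracle, both hypothetical, since the real oracle is uncomputable): with a halting oracle in hand, compute BB(n) by enumerating the finitely many n-state machines, discarding the non-halters, running the rest to completion, and taking the max.

```python
def machines_with_n_states(n):
    # Toy enumeration: codes 0 .. 2^n - 1 stand in for the finitely
    # many n-state machines.
    return range(2 ** n)

def halting_oracle(code):
    # Uncomputable for real Turing machines; a toy rule here.
    return code % 3 != 0

def run_to_completion(code):
    # Safe to call only on machines the oracle says halt; returns a
    # toy "number of 1s printed".
    return code % 7

def bb(n):
    # Max output over the halting n-state machines: finitely many
    # oracle queries, then ordinary (terminating) simulation.
    return max((run_to_completion(c)
                for c in machines_with_n_states(n)
                if halting_oracle(c)),
               default=0)

print(bb(3))  # 5 under these toy rules
```

For the converse direction (BB solves halting), note that an n-state machine halts iff it halts within BB(n) steps, which is exactly the reduction discussed earlier in the thread.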