I don’t follow your argument — I don’t see how it will cover *every* sufficiently high n, not just infinitely many n, given that BB’s growth might come in “spurts”.

On the other hand, I now see that my specific worry (what happens if for infinitely many n, BB(n-1) is only 3 less than BB(n)) makes things *easier* for your conjecture #33, not harder as I had for some reason been assuming. Therefore, it’s now plausible to me that if I thought about it a bit more, I’d agree with you that it’s clearly stronger. Still, if you can turn your argument into a proof (of #31 from #33, interpreting both as applying to “every sufficiently high n”), then I’d understand much better why it’s stronger.

To prove BLB(n+2) > BLB(n), use the proof in the survey of BB(n+2) ≥ (n+2)/n BB(n), which works by picking one state (the most popular) and introducing a new 2-state 2-step time waster before it (move left, move right).

In this case we can’t necessarily pick the most popular state, since it might be the starting state, and that would mean leaving the tape blank for a couple steps at the start — not allowed (if I understand BLB correctly). So we pick any other state which is transitioned to at least once when the tape is not yet blank again. Thus we weaken the conclusion (probably not as much as what I stated, but I’m too lazy to figure out how much stronger it can still be).

Maybe we actually *could* pick the starting state, provided the starting state itself was not changed — only the explicit transitions to the starting state would go to the new time-waster instead. Maybe we could rescue the original conclusion that way? I’m too lazy to check that, too (perhaps because it’s late).
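
To make the construction concrete, here's a sketch in Python under my reading of the trick. The machine encoding and the simulator are my own hypothetical setup, not from the survey; the `bb2` table is the standard BB(2) champion.

```python
# Sketch of the time-waster trick: redirect every transition into a chosen
# state through two new states that step left and then right, restoring the
# tape and head position but costing two extra steps per visit.
# Hypothetical machine format: {(state, symbol): (write, move, next_state)},
# with next_state 'H' meaning halt.

def simulate(tm, start='A', limit=10**6):
    """Run tm from a blank tape; return the number of steps before halting."""
    tape, pos, state, steps = {}, 0, start, 0
    while state != 'H' and steps < limit:
        write, move, nxt = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        state, steps = nxt, steps + 1
    return steps

def insert_waster(tm, target):
    """Route every transition into `target` through a 2-step left/right waster."""
    out = {}
    for key, (write, move, nxt) in tm.items():
        out[key] = (write, move, 'W1' if nxt == target else nxt)
    for sym in (0, 1):
        out[('W1', sym)] = (sym, 'L', 'W2')    # step left, tape unchanged
        out[('W2', sym)] = (sym, 'R', target)  # step back right, resume
    return out

# Standard BB(2) champion; it transitions into state B three times:
bb2 = {('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
       ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H')}
slow = insert_waster(bb2, 'B')
print(simulate(bb2), simulate(slow))  # → 6 12: each visit to B costs 2 extra steps
```

Note that the target B is not the start state here, which is exactly the restriction discussed above for BLB.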

I don’t agree with this:

BBB(n + 1) > BBB(n) for sufficiently large n should follow from the monotonicity of BB and the fact that BBB dominates BB.

For example, define X(n) such that X(2n) = BB(2n) + 1 and X(2n-1) = X(2n). Then X dominates BB but X does not always grow.
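
Numerically, with a computable stand-in `f` for BB (hypothetical, since BB itself is uncomputable):

```python
# Tiny illustration of the counterexample: X dominates f but does not
# always grow. f is a made-up strictly increasing stand-in for BB.
def f(n):
    return 2 ** n

def X(n):
    # X(2k) = f(2k) + 1 and X(2k - 1) = X(2k)
    m = n if n % 2 == 0 else n + 1
    return f(m) + 1

assert all(X(n) > f(n) for n in range(1, 50))  # X dominates f...
assert X(3) == X(4)                            # ...but X fails to grow here
```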

This is true, but I think this is a symptom of looking under the lamp post because that’s where the light is. These simplifications work, not because many things are simplifiable, but because our limited mental machinery can only deal with things that are simplifiable, and then we keep those where the simplifications work.

I think this is true in many fields. Only an infinitesimal fraction of numbers are integers, or rational, or algebraic. But almost all the numbers we use are, since these are the ones we can think about. Almost all functions are exponentially hard to compute, but we only understand the ones that are O(N^some-low-power). Many of these do something useful, but they are not representative.

This leads to the only argument I’ve seen for why it is possible that P=NP. The analogy is polynomials with integer solutions. For a small number of terms, and small powers, we understand how these work, and intuitively expect that higher powers are more complicated, but fundamentally similar. But as the solution to Hilbert’s 10th problem shows, starting at some higher power (I think the proof used 13th powers), solving some of these equations is equivalent to solving the halting problem. So in some sense 13th-power polynomials can express an entirely different class of problems compared to lower-order polynomials. The argument that P might equal NP then holds that something similar might happen with algorithms. We have lots of understanding and tools for low-order algorithms, such as O(N^2) or O(N^3). But perhaps, similar to polynomials, there is some much larger (but finite) exponent where the system becomes capable of addressing harder problems. But we can’t see this, since our experience and intuition is all in easier-to-understand cases.

So how about this for a sequence? Computable Beeping Busy Beaver (CBBB): the longest a program can run (started on the blank tape) before provably quasihalting. CBBB puts an upper bound on the values of BBB that we could ever in our wildest fever dreams possibly hope to obtain.

What is the complexity of this sequence? \(\Delta_2\)?

That explains why this one takes off earlier, but it doesn’t explain why it takes off *so much* earlier.

The first few of these programs that turned up quasihalted between 2,000 and 3,000 steps. That wasn’t exactly shocking, but it was like, alright alright, not bad, that extra transition really gives it some juice. Then a few months later I found one that quasihalted in 66,349 steps. That number was already pretty shocking, and I wasn’t expecting to find something like that.

But still, it wasn’t even close to 47,176,870. Why not? I reasoned: a program with n + 1 states doesn’t just have more transitions than a program with n states, it also has *more expressive* transitions. The 5-state transitions have more choices available to them than their 4-state counterparts, and that lets them do more.

Imagine you worked at a software company where arbitrary `goto` statements were permitted, but only to four specific targets. Now imagine that company policy was changed to allow jumps to five specific targets. The Busy Beaver function tells us that pandemonium would ensue.

Well, the Beeping Busy Beaver function apparently tells us that the company policy of four jump targets would already cause pandemonium, and that company policy should therefore cap the jump targets at three. That’s what’s new here. Is there an explanation for why it should be this early? (Other than, it has to be somewhere, and here it is.)

Define the Mediocre Beaver function \(MB(n)\) as the smallest number such that at least 50% of halting (essentially different) n-state Turing machines halt in \(\le MB(n)\) steps.

There are different versions of the Mediocre Beaver function depending on how we count Turing machines that are “essentially the same”, as defined in “The Busy Beaver Frontier”. I have mostly thought about the version where we count essentially different Turing machines, but everything in this comment probably applies to both mutatis mutandis.

Can MB(n) be dominated by a computable function?

I’m a useless physicist, so after much thought I couldn’t answer this. I’m not sure I have any intuition either way. But if it can, it would be fascinating to have an explicit example.

I made some progress on a similar question:

Is MB(n) computable?

I suspect the answer is no but I couldn’t quite prove it.

My general strategy was like this:

If we can compute MB(n), then we can simulate all n-state Turing machines and count how many essentially different Turing machines halt in MB(n) or fewer steps; call this N. Now the obvious thing to do is to try to compute BB(n) by simulating n-state Turing machines until 2N of them halt. The problem with that is that multiple essentially different Turing machines might take exactly MB(n) steps to halt, and so more than 50% of n-state Turing machines might halt in MB(n) or fewer steps. And so 2N might overestimate the number of essentially different Turing machines that halt.

Let DH(n) be the number of essentially different Turing machines that halt. If 2N only overestimated DH(n) by a “small” amount (say \(2N-DH(n)=O(n^k)\) for some k), then I think this kind of argument could be saved. But I’m not sure how to show that, or even if it is reasonable.
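
The tie problem is easy to see numerically. Here is a toy illustration with made-up halting times standing in for the essentially different machines (the numbers are hypothetical, chosen only to force a tie at MB):

```python
# Toy model: halting times of the "essentially different" halting machines.
# Three machines tie at exactly 7 steps, which makes 2N overshoot DH.
halting_times = [3, 7, 7, 7, 9]

DH = len(halting_times)  # total number of halters
# MB: smallest t such that at least 50% of halters halt in <= t steps
MB = min(t for t in sorted(halting_times)
         if sum(h <= t for h in halting_times) * 2 >= DH)
N = sum(h <= MB for h in halting_times)  # halters within MB steps

print(MB, N, 2 * N, DH)  # → 7 4 8 5
```

Here 2N = 8 exceeds DH = 5, so simulating until 2N machines halt would never terminate.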

bb(n) is the maximum runtime of a halting Turing Machine with binary alphabet and n non-halt transitions.

It’s easy to see that bb(n-1) ≤ BB(n) ≤ bb(2n-1). I could verify so far:

bb(0) = 1

bb(1) = 2

bb(2) = 4

bb(3) = 6

bb(4) ≥ 17

bb(5) ≥ 21

bb(6) ≥ 38

bb(7) ≥ 107

bb(8) ≥ 583

Per Marxen and Buntrock’s BB(5) bound we have:

bb(9) ≥ 47,176,870

I’m trying to learn more about bb(8).

After all, the one halting state is the least complex state imaginable. It occurs at most once, it has no outgoing transitions, and even I perfectly understand it.

If you remove it, you have a TM with one less state that does exactly the same steps before continuing on.

This easily explains differing liftoffs.
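
For what it’s worth, the small entries above can be sanity-checked with a short simulator. The table below is the standard BB(2) champion (the encoding is my own): it has three non-halt transitions and one halt transition, so its 6-step run witnesses bb(3) ≥ 6, matching both bb(3) = 6 and bb(n-1) ≤ BB(n) above.

```python
# Standard BB(2) champion, encoded as
# {(state, symbol): (write, move, next_state)}, 'H' = halt.
champion = {('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
            ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H')}

# bb counts non-halt transitions, so this machine is a bb(3) witness.
non_halt = sum(nxt != 'H' for _, _, nxt in champion.values())

def run(tm):
    """Step from a blank tape until the halt state; return the step count."""
    tape, pos, state, steps = {}, 0, 'A', 0
    while state != 'H':
        write, move, state = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        steps += 1
    return steps

print(non_halt, run(champion))  # → 3 6
```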
