My Favorite Growth Rates

Update (8/17): Believe it or not, this blog post actually led to something (scroll down to comment #52 if the link doesn’t work).

105 Responses to “My Favorite Growth Rates”

  1. KWRegan Says:

    From what I know and hear, one of the growth rates arises only from current upper-bound knowledge where little credence is given to its possibly being tight.

    Another comes only from a problem where matching lower bounds are known.

    (By “only” I mean that I’ve not seen and would have a hard time imagining an equally-nice case where the bound arises. That said, finding which two growth rates I’m referring to is a little quiz…)

    I presume that “const” does not count as a “growth” rate—after Ackermann and Busy-Beaver, its inverse growth rate would be “Diverges” :-).

  2. Scott Says:

    I presume that “const” does not count as a “growth” rate

    I thought about it, and the answer is no. 🙂

  3. harrison Says:

    What about n^(12+ε) (the complexity of primality-testing an n-digit number under AKS)? Or n log n log log n (the Schonhage-Strassen multiplication algorithm)?

    Clearly your policies for selecting growth rates are blatantly discriminatory, and I refuse to support them.

    Also: can you exhibit something with an actual growth rate which is the inverse of the Busy Beaver function (or, for that matter, of any non-computable function?) Apologies if this is a stupid question; I’ve just never seen anything like that (although my experience is admittedly much more limited than yours.)

  4. Scott Says:

    Harrison: AKS was improved by Lenstra and Pomerance to ~O(n^6), and the truth is almost certainly lower.

    n log n log log n? Blech!

    Sure, I can exhibit something with growth rate BB^{-1}(n). Consider the computable language L consisting of all pairs ⟨M,T⟩, where M is a Turing machine and T is a transcript of M’s execution. Then given x∈L∩{0,1}^n, can we lower-bound how many bits of x are taken up by M? Yes, by BB^{-1}(n). 🙂

  5. KWRegan Says:

    Besides “Blech!”, the “n log n log log n” got improved to O(n log n 2^{log* n}) by Martin Fürer at STOC’07. Is that blechier?

    That’s the point of my “little quiz”: a growth rate associated with a single computational problem should be in the “Platonic Pantheon” only if the bound is tight—even if the problem is, well, really important.

    BTW, Google provides pretty good support for my “only”—by finding a previous entry in this blog as top hit for a wordless search!

  6. Scott Says:

    Is that blechier?

    No, because it’s much more practical. 🙂

    a growth rate associated with a single computational problem should be in the “Platonic Pantheon” only if the bound is tight

    Or if it’s just really cool, like n^φ.

  7. HN Says:

    Scott,

    Size of superconcentrators of depth 2: n log^2 n / log log n
    Size of superconcentrators of depth 2d, 2d+1: n A^{-1}(d,n), for d>=2.

    Are you preparing lectures for some sort of introductory algorithms course?

  8. Scott Says:

    Are you preparing lectures for some sort of introductory algorithms course?

    Fortunately, no.

  9. Gil Says:

    Dear Scott, can you say more about
    f: f(f(n))=2^n
    It looks precisely in-between the quasi-quasi-quasi… polynomial and the sub-sub-sub… exponential functions.

    (Once I managed to improve an exponential upper bound from 2^n to 2^(n^(1/2)), then for a short time to 2^(2^((log n)^(3/4))). I expected that the next improvement would be to exp(exp(exp((log log n)^(3/4)))), but it went all the way to n^(log n), where it is stuck.)

  10. rrtucci Says:

    Ugly. Too many. Shaker simplicity out the door. Also, no taxonomy or classification theorem.

  11. rrtucci Says:

    Even a sea shell collector has a rudimentary taxonomy (cowries, whelks, etc).

  12. Scott Says:

    can you say more about f: f(f(n))=2^n

    Thanks for asking, Gil! That one’s actually my favorite growth rate of all. When I was 12 or 13, I wasted several weeks trying to find an explicit f such that f(f(n)) would be exponential. Well, it turns out that such functions exist — one can give iterative processes that converge to them — but they have no analytic expression.

    But here’s the best part: these “half-exponential” functions actually arise in complexity theory! In particular, the best circuit-size lower bound we can currently prove for the classes PEXP, MAEXP, and Σ2EXP is half-exponential. (For more, see this paper by Miltersen, Vinodchandran, and Watanabe.) Also, as pointed out in the Razborov-Rudich paper, there can be no natural proof that Factoring and Discrete Log require circuits of more than half-exponential size. For that’s the point where your lower bound would start to exceed the upper bound that follows from breaking PRGs!

  13. Scott Says:

    rr: Here’s a classification theorem, just for you.

    THEOREM: Every growth rate is either
    (i) sub-polylogarithmic,
    (ii) polylogarithmic,
    (iii) between polylogarithmic and polynomial,
    (iv) polynomial,
    (v) between polynomial and exponential,
    (vi) exponential, or
    (vii) super-exponential.

    PROOF: Let me know if you’re having trouble.

  14. rrtucci Says:

    Sorry Scott. I was just trying to yuck your yummy.

  15. rrtucci Says:

    Maybe you should separate them into those 7 categories. 7 is good: 7 seas, 7 wonders, 7 deadly sins, …

  16. Gil Says:

    Thanks, Scott. (Very impressive project for a teenager.)

    I wonder if such half-exponential functions were considered early on in math, and even more whether such growth behavior can occur naturally in some problems of mathematics. (It looks very unjust if they don’t.)

    I vaguely remember (but maybe I am wrong) that I was told that such a function cannot be analytical in a neighborhood of the positive real numbers.

  17. Cheshire Cat Says:

    Does n^{A^{-1}(n)} occur naturally in some context?

  18. Raoul Ohio Says:

    I have also wasted some time dreaming up functions between the polynomials and exponentials.

    My guess is that there are many functions such that f(f(x)) = exp(x), so you would need some criterion for the “best” one, rather like the gamma function is the “best” generalization of the factorial function due to log-convexity.

    Does anyone have a clue about why the half exponential function might not be analytical in a neighborhood of the positive real numbers? Assuming it could be expanded in a power series leads to an interesting set of equations for the coefficients.

  19. Scott Says:

    Cheshire Cat: It occurs in this paper by Ben-David and Halevi, but only in the context that they’re able to prove things for any computable superpolynomial function, no matter how small.

  20. Cheshire Cat Says:

    Thanks, Scott, I’ve heard about that paper, but haven’t actually read it. Now here’s a reason 🙂

  21. Douglas Knight Says:

    classification?
    Maybe I don’t know what a “growth rate” is, but there are incomparable monotone functions. For example, there are functions infinitely often less than n and infinitely often greater than 2^n. I guess that falls in (v), but more exotic examples don’t.

  22. Scott Says:

    Douglas: Thanks for bringing that up!

    For the purposes of my classification theorem, we can make the following definitions.

    sub-polylogarithmic = o(log^ε n) for all ε>0
    polylogarithmic = O(polylog(n)) and not o(log^ε n) for all ε>0
    between polylogarithmic and polynomial = o(n^ε) for all ε>0 and not O(polylog(n))
    polynomial = O(poly(n)) and not o(n^ε) for all ε>0
    between polynomial and exponential = o(2^(n^ε)) for all ε>0 and not O(poly(n))
    exponential = O(2^poly(n)) and not o(2^(n^ε)) for all ε>0
    super-exponential = not O(2^poly(n))

    In that case we can hopefully agree that the theorem is true. 🙂 Furthermore, I think it captures what we usually care about in complexity — namely the worst case.

    Of course, for this to work I’m using the fact that the growth rates in the classification are themselves totally ordered: anything o(log^ε n) for all ε>0 is also O(polylog(n)), anything O(polylog(n)) is also O(poly(n)), and so on.

    Without that “scaffolding” of “natural” growth rates, I would indeed be in the mess you describe with incomparable monotone functions.

    Question: Does anyone know a natural way to define the concept of “growth rate,” in such a way that the growth rates are totally ordered?

  23. Dani Fong Says:

    Scott: Well, there need to be some changes, since one can postulate functions F and G where F is never in O(G) and G is never in O(F).

    A stupid example: F = e^(x + sin(x)), G = e^(x + cos(x)). (No such beasts arise in practice though, right, oh Zookeeper?)

    Monotone, and resistant to analysis. But intuitively we’d say they’re kinda the same, right? Just out of phase, kind of. There are an infinite number of crossings.

    Here’s the idea: given two functions, identify the last point at which they cross (under the same constant-twiddling provisions as in normal big-O notation). If there is no last point, the classes are the same. Otherwise, the one which has the highest value has a greater complexity class.

    Now, the problem is that the ‘last point’, hereafter referred to as limit, may be uncomputable (should be fairly easy to encode that issue), but so long as the functions exist everywhere, so must the limit either exist or not exist, computable or not. (right?)

    So then we have a totally ordered set, even though the precise order may be uncomputable.

  24. Paul Beame Says:

    My favorite growth rate isn’t on your list:
    sqrt(log n / log log n)

    It is derived from t^{t^2}=n and is the optimal query bound for some natural data structure problems. I particularly like it because we had a paper showing the lower bound rejected because it was a bit ugly, and with a bound like that it couldn’t possibly be proving an optimal result. Since then, I’ve seen it crop up more than once.

  25. Scott Says:

    Dani, here’s the problem with your proposal. Let f(n)=2^n, and let g(n) be a function that infinitely often grows like n and infinitely often like 3^n. Do we really want to identify their orders of growth as “the same”? If so, then by the same argument we should identify g(n) with n^2. Hence, by transitivity, 2^n has the same order of growth as n^2.

    Incidentally, in the example that you gave, F=O(G) and G=O(F), since the fluctuations in sin and cos never make F and G more than a universal constant factor away from each other (namely e^sqrt(2)).

  26. Scott Says:

    OK, another problem I’ll throw out there: can anyone prove that, if f is any function composed from basic arithmetic operations, exponentials, and logs, then f(f(n)) grows either faster or slower than exponentially?

  27. Dani Fong Says:

    Scott:

    Oh. Oh right. I forgot to look for that type of case. The thoughts I miss at 3 in the morning. Hrrrmmmm…

    The universal constant factor bit is another embarrassment, though e^(x + x*sin(x)) and e^(x + x*cos(x)) should (uninterestingly) do the trick.

    My initial message was voicing my doubts as to your question’s solubility, but I have only vague ideas of how to show that, since “naturalness” is a vague term to begin with.

    Why not take the route of Arrow’s theorem and start listing some properties you take to be natural?

  28. James Cook Says:

    Gil: can you say more about f: f(f(n))=2^n

    Scott: Thanks for asking, Gil! That one’s actually my favorite growth rate of all. When I was 12 or 13, I wasted several weeks trying to find an explicit f such that f(f(n)) would be exponential. Well, it turns out that such functions exist — one can give iterative processes that converge to them — but they have no analytic expression.

    Well, great minds do think alike! 🙂 This was actually the problem that got me into mathematics, at the very same age. (Though I “wasted” more than a year on it.) It led me to discover the fantastically exotic concept of “infinite-dimensional spaces” (you know, the things that Scott is always dissing every chance he gets), because I wanted to find a setting in which I could study maps like f → f∘f and its “inverse”.

    Anyway, those who are interested in this topic might start here and here.

  29. James Cook Says:

    Scott:

    OK, another problem I’ll throw out there: can anyone prove that, if f is any function composed from basic arithmetic operations, exponentials, and logs, then f(f(n)) grows either faster or slower than exponentially?

    I’m not sure I understand the question: let f(n) = exp(n). Then f(f(n)) grows faster than exponentially. On the other hand, let g(n) = log(n). Then g(g(n)) grows slower than exponentially.

    Are you asking to find a general method for comparing the growth rate of an arbitrary such f to that of the exponential function?

  30. Scott Says:

    James: In case you missed it, I was asking for a proof that for every such f, either f(f(n)) is subexponential or else it’s superexponential. I know there exist f’s putting f(f(n)) into both classes. 🙂

  31. James Cook Says:

    Ah, okay I see: in other words there are no such f that are incomparable to the exponential function. (Which fits in with the previous discussion — sorry, I should have gotten that!)

  32. James Cook Says:

    That should be: no such f such that f∘f is incomparable…

  33. anonymous Says:

    James, I think he’s asking for a proof that there is no f composed from basic arithmetic operations, exponentials, and logs, such that f(f(n)) is exponential.

  34. harrison Says:

    “a growth rate associated with a single computational problem should be in the ‘Platonic Pantheon’ only if the bound is tight”

    So what’s with the 2.376 exponent?

  35. James Cook Says:

    Okay, at the risk of continuing to flaunt my doofosity, let me see if I have this right:

    Proposition: If f:N → R is a composition of (finitely many, I presume) “elementary” functions (as defined above), then the function f∘f (the composition of f with itself) lies either strictly above or strictly below “exponential” on the growth-rate scale.

    Have I got it?

  36. Scott Says:

    Yes. 🙂

  37. Louis Wasserman Says:

    I don’t see n^n. I’m pretty sure there’s a problem with that complexity, but I can’t think of any off the top o’ me head.

  38. Scott Says:

    Louis: To a complexity theorist, n^n is basically indistinguishable from n!

  39. Scott Says:

    Harrison: The 2.376 honors the Coppersmith-Winograd matrix multiplication algorithm, which most of us believe will be superseded but which hasn’t been for almost 20 years.

  40. cody Says:

    I’ve been wondering about that: why is it conjectured that there is “no fastest algorithm for multiplication”? There must be some sort of upper bound on it, right? Maybe I should take a course in computational complexity before I ask this question?

  41. Scott Says:

    Cody: For integer multiplication I don’t know what the conjecture is; in any case we can already do it in extremely close to n log n time.

    For matrix multiplication, it could be (for example) that for every ε>0, there exists an algorithm to multiply matrices in time O(n^(2+ε)), and yet there’s no O(n^2) algorithm. We simply have no idea.

    What we know, from the Blum Speedup Theorem, is that there exist computational problems that have no fastest algorithm — even problems for which, if there’s an O(f(n)) algorithm, then there’s also an O(log f(n)) algorithm. But the problems constructed in the Blum Speedup Theorem are artificial, and to my knowledge no “natural” problem has been proved to exhibit this behavior.

  42. MattF Says:

    Maybe you could look at growth rates ‘algebraically’, i.e., generate new ones with sums, products, ratios, of old ones– I can see that you’d then have to include a constant growth rate as an identity, for example…

  43. Klas M. Says:

    With some additional assumptions, Ran Raz has proven an n^2 log n lower bound on the number of arithmetic operations needed to multiply two n × n matrices.
    The added assumption is that the circuit realizing the product does not perform multiplications with constants that are too large compared to n.

    Lower bound

  44. David Speyer Says:

    Maybe everyone already knows this, but it is a theorem of G.H. Hardy that, if F is any function composed of basic arithmetic operations, exponentials, and logs of eventually positive functions, then F is eventually positive, eventually negative, or zero. (So, for example, sin x cannot be expressed in terms of arithmetic operations, exponentials, and logs of eventually positive functions.) So it always makes sense to ask, for such functions G and H, whether G or H has a higher growth rate, meaning whether log G – log H is eventually positive, eventually negative or zero. I think that the citation is Orders of Infinity, but I don’t have the book available to check.

    I’ve never looked into the proof, but it is possible that Hardy’s techniques would make it clear how to prove that f(f(n)) is never exponential.

  45. Scott Says:

    Thanks, David! That result of GHH sounds beautiful; I’d never heard it before.

  46. KWRegan Says:

    Harrison: The 2.376 honors the Coppersmith-Winograd matrix multiplication algorithm, which most of us believe will be superseded but which hasn’t been for almost 20 years.

    That’s the first answer to my “little quiz” in comment #1. For the second, simply

    Google n^0.753

    as-is, no quotes! The top hit is Scott’s item on Farhi-Goldstone-Gutmann. None of the next 28 looks algorithmic at all, then comes a PDF of Childs-Cleve-Jordan-Yeung. As Scott mentions there, for classical randomized algorithms the bound is tight.

    A novel use of Google for research? 🙂 Also, does anyone know whether, if you click on a hit after a search, Google records the click and uses it in its ranking criteria for that page or search?

  47. lylebot Says:

    Google will never say, but as someone who has some experience with search companies, I strongly suspect that they do use clicks to train their ranking algorithm. However, I’m sure they do a lot of preprocessing—raw clicks are strongly biased to the top of the list.

  48. Bram Cohen Says:

    Related to demonstrating that the half-exponential function is non-analytic, does anybody have any idea what its Laplace transform looks like?

  49. Thomas S. Says:

    OK, another problem I’ll throw out there: can anyone prove that, if f is any function composed from basic arithmetic operations, exponentials, and logs, then f(f(n)) grows either faster or slower than exponentially?

    I wonder if this goes far enough. Are there even any such elementary functions that are strictly between polynomial and exponential? That is, an f(n) such that

    f(n) ∊ Ω(n^m) for all m∊ℕ
    f(n) ∊ O((1+ε)^(n^ε)) for all ε>0

    I can’t think of any examples offhand, and I’d like to guess they don’t exist. Asymptotic half-exponentials would be a special case.

  50. Scott Says:

    Thomas S.: n^(log n)

  51. Thomas S. Says:

    d’oh!

  52. Scott Says:

    Alright folks … recall I asked for a proof that, if f is any function composed of basic arithmetic operations, exponentials, and logs, then f(f(n)) is either subexponential or superexponential.

    I’ve now thought about it a bit, and I think it’s a trivial (in retrospect 🙂 ) induction on the complexity of the formula. Actually, I still don’t know how to do it if you allow division and subtraction, but I can do it assuming only addition, multiplication, exponentials, and logs.

    Say a function f is in “class 0” if f(f(n)) is greater than polylog(n) and less than 2^(n^ε). (In other words, if f is more than half-logarithmic and less than half-exponential.)

    Likewise, f is in “class 1” if f(f(n)) is greater than exponential and less than double-exponential, “class 2” if f(f(n)) is greater than double-exponential and less than triple-exponential, etc.

    In the other direction, f is in “class -1” if f(f(n)) is greater than polyloglog and less than polylog, “class -2” if f(f(n)) is greater than polylogloglog and less than polyloglog, etc.

    CLAIM: Every nonnegative function f:R→R composed from addition, multiplication, exp’s, log’s, nonnegative constants, and n belongs to one of the above classes.

    (Technical note: here we won’t allow expressions of the form f(n)^g(n). But that’s OK, since such expressions can be simulated as exp(g(n) log(f(n))).)

    Proof: By induction on the number of operations T in the expression for f.

    Base case. When T=0, the hypothesis clearly holds for f(n)=n, f(n)=1, etc.

    Induction step. Suppose it’s true for T. Then to get a formula of complexity T+1, we can either

    (i) Exponentiate a formula of complexity ≤T,
    (ii) Log a formula of complexity ≤T, or
    (iii) Add or multiply two formulas of complexity ≤T.

    Now we simply need to check that none of these operations violates the induction hypothesis. Exponentiating moves us up a class: from class -1 to class 0, from 0 to 1, etc. Likewise, taking the log moves us down a class. And adding or multiplying f and g gives us a function whose class is max{class(f),class(g)}. QED.

    The above claim immediately implies what I wanted to prove — namely, that you can’t get the half-exponential function.
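    For concreteness, here’s the same induction rendered as a little Python sketch (a toy encoding I just made up, nothing canonical):

        # Expressions over n, nonnegative constants, +, *, exp, log, encoded as
        # nested tuples; the class bookkeeping follows the three induction cases.
        # Constants get None: exp/log of a constant is a constant, and constants
        # drop out of the max for + and *.
        def growth_class(e):
            op, *args = e
            if op == 'n':
                return 0                    # base case: f(n) = n sits in class 0
            if op == 'const':
                return None
            if op in ('exp', 'log'):
                c = growth_class(args[0])
                if c is None:
                    return None
                return c + 1 if op == 'exp' else c - 1   # up or down a class
            if op in ('add', 'mul'):
                cs = [c for c in map(growth_class, args) if c is not None]
                return max(cs, default=None)             # class of a sum/product
            raise ValueError(op)

        # n^(log n) = exp(log n * log n): two class -1 pieces multiplied, then exp'd
        print(growth_class(('exp', ('mul', ('log', ('n',)), ('log', ('n',))))))  # 0

    Since every expression lands on one of the integer classes, no expression can ever be half-exponential.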

    What I love about this — and this gets back to R.R. Tucci’s comment — is that it gives a completely different perspective on the “classification of growth rates.” Namely, almost all the growth rates we ever talk about belong to tiny little discrete islands (class 0, class 1, etc.), separated by vast seas of exponentiality! And it’s only when talking (for example) about half-exponential functions that we ever venture into those seas. The rest of the time we simply hop over the seas by taking logs and exponentials. (Indeed, there’s a strong analogy here to the power set operation in set theory.)

    Now I’m wondering: can G.H. Hardy’s theorem (mentioned in comment #44 by David Speyer) also be proved using this sort of argument? And of course, how do we handle subtraction and division?

  53. John Sidles Says:

    Scott, is my understanding of your argument correct that—rather like the Clifford group sparsely samples Hilbert space—the group of elementary functions and their compositions sparsely samples function space?

    Thus the goal of analyzing algorithmic complexity via functional bounds might somewhat resemble the goal of building a quantum computer out of Clifford gates … no can do … so alternative approaches are required.

    If so, the method of your proof provides a very nice counterexample to “Wovon man nicht sprechen kann, darüber muss man schweigen” (Tractatus, 85). Doh!

    Who would have thought Wittgenstein’s remark might apply to complexity bounds? 🙂

  54. Nagesh Adluru Says:

    Well, it turns out that such functions exist — one can give iterative processes that converge to them — but they have no analytic expression.

    Scott, do you mean they have no “closed form”? Is it possible to express such functions as power series?

  55. Scott Says:

    I don’t know within what interval a power series for a half-exponential function would be convergent — good question!

  56. Raoul Ohio Says:

    So the half exponential is not a finite composition of addition, multiplication, exponentials, and logs. Likewise, exponentials are not a finite composition of additions and multiplications, but we still use them.

    Here is a slightly related comment sometimes seen in math books:

    The set of continuous functions has measure zero in L2, so why bother studying them?

    One answer is that the majority of the functions of interest to those who use calculus and functional analysis are in fact continuous.

  57. Nagesh Adluru Says:

    Scott, your proof of the subexponential/superexponential result is for f’s which have closed forms. Is it extensible to f’s which have no closed form (analytic expression)? Does this have to do with allowing division and subtraction in the composition?

  58. Scott Says:

    Nagesh: We know that half-exponential functions exist, so obviously the proof couldn’t be extensible to functions with no closed form.

  59. Thomas S. Says:

    What about 1/n-th exponentials, functions f(x) such that f composed with itself n times (f ∘ f ∘ … ∘ f)(x) is e^x or maybe Θ(e^x)? And generalizing on, m/n-th exponentials, which are 1/n-th exponentials composed with themselves m times? You can get an interesting (?) lattice of complexity functions between polynomial and exponential that way, one for each rational in [0,1].

    It’s easy to show that 1/n-th exponentials exist. Start with any real numbers 1 < a_1 < a_2 < … < a_n = e, and any monotone increasing function f: [1, a_(n-1)] → [a_1, e] satisfying the constraint f: a_i ↦ a_(i+1). Then the defining relation (f ∘ f ∘ … ∘ f)(x) = e^x uniquely extends this to a 1/n-th exponential on ℝ.

    Also, m/n-th exponentials all have the same asymptotic growth rate. (trivial proof: let f, g be m/n-th exponentials, f ∊ o(g) for contradiction. Then f-composed-with-itself-n-times is o(g-cwi n-times), which contradicts both sides being Θ(e^x-composed-with-itself-m-times). So f∊Θ(g)).

  60. Thomas S. Says:

    The blog isn’t letting me use “less than” signs. The vanished text was,

    …Start with any real numbers 1 ≨ a_1 ≨ a_2 ≨ … ≨ a_n =e, and any monotone increasing function…

    (substituting “≨” for the illegal sign)

  61. Thomas S. Says:

    caveat – this doesn’t actually distinguish most complexity functions. For instance, x^log(x) is superpolynomial but smaller than all 1/n-th exponentials (if you compose x^log(x) with itself n times, you only get e^(log(x)^(2^n))). And the same is true of x^(A^(-1)(x)). So you can’t distinguish x^log(x) and x^(A^(-1)(x)) by comparing them with all m/n-th exponentials. Unlike real numbers, which in fact can be defined by comparing them with every rational number.

    Hmm, is this general? Can there exist a countable set of asymptotic growth rates which are “dense” in the same sense as the rational subset of the reals, or if not how big would such a set have to be? (I think I’ve heard someone ask this question before, was it you Scott?)

  62. Nagesh Adluru Says:

    Oh right got it!

  63. Scott Says:

    I think I’ve heard someone ask this question before, was it you Scott?

    Probably not, but good question! Since there are no more functions f:N→N than there are reals, it seems plausible that there would be a countable “dense” subset — but what’s the right notion of convergence here?

  64. Jadagul Says:

    Or another way of putting it: the continuous functions are dense in L2, so every function is arbitrarily close to a continuous function. Just like the rationals in the reals; if you pick a random real number, the chance that it’s rational is 0 (the measure of the rationals in the reals is 0). But we can do pretty much all arithmetic without leaving the rationals.

  65. James Cook Says:

    — but what’s the right notion of convergence here?

    Presumably the order topology, induced by the order relation “grows asymptotically faster than”.

  66. Andy D Says:

    I don’t see any clear sense in which there can be only a countable dense subset of growth rates. For instance, given any countable set of integer sequences, you can always diagonalize to produce a new sequence that asymptotically dominates all of them–or is dominated by all of them, or incomparable with all of them.

    Also, an ‘order topology’ is usually understood relative to a total ordering, whereas the order James mentions is only a partial order (some sequences are asymptotically incomparable). I don’t know of any general method for wringing topologies out of POs.

    There is a way around this latter difficulty (which doesn’t, however, eliminate the first one):

    -fix a nonprincipal ultrafilter U on the natural numbers;

    -identify any two integer sequences that are equal on a set in U (i.e. a set deemed ‘large’ by U), forming equivalence classes;

    -now order the classes by
    [ {a_i} ]

  67. Andy D Says:

    (comment appears cut-off on my browser–continuing:)

    -now order the classes by
    [ {a_i} ]

  68. Andy Says:

    (brackets reserved for html?)

    -now order the classes in the way induced by
    {a_i}

  69. Andy Says:

    (frick…)

    -now order the classes in the way induced by
    (a_i)

  70. Andy Says:

    OK, I give up. Anyway, you get a total order on equiv. classes, whose order topology doesn’t have a countable dense subset (again by diagonalization), even though it’s become ‘easier’ for sequences to serve as upper/lower bounds.

  71. Andy Says:

    The classic example is the family of growth rates f_b(x) = x^b,
    b a real number–uncountably many, totally ordered and quite distinct from each other. This can be adapted to integer sequences, and can also be adapted to give a continuum-sized collection of infinite sets of integers, such that any pair of sets have finite intersection.

  72. Niel Says:

    Re: problems with greater-than and less-than symbols: if the following attempt to write these symbols using ISO entity codes works, then the rest of this post will display properly —
    < can be produced by writing &lt;, and
    > can be produced by writing &gt;.
    — and now to try to turn off the italics.

  73. Andy D Says:

    Thanks Niel…

    I was saying

    -now order the classes by
    [ {a_i} ] > [ {b_i} ] if the set {i: a_i > b_i} is in U, i.e. ‘large’.

    This is a consistent definition giving a total order.

  74. Gil Says:

    Raoul wrote: “Does anyone have a clue about why the half exponential function might not be analytical in a neighborhood of the positive real numbers? Assuming it could be expanded in a power series leads to an interesting set of equations for the coefficients.”

    Regarding the power series, I do not know what to do with f(f(x))=exp(x). But if you want f(f(x))=exp(x)-1, which seems quite as good, and write f = a_1 x + a_2 x^2 + a_3 x^3 + …, you do get equations which seem solvable (in theory), and in practice I calculated a_1=1, a_2=1/4, a_3=1/48, a_4=0, and a_5 is positive but a bit ugly. I do not know whether the coefficients are of some combinatorial interest, or whether the emerging power series is related to some honest real function.
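    For anyone who wants to check or extend these numbers, here is a small sympy sketch (the truncation order N is an arbitrary choice, and I take the branch a_1 = +1):

        import sympy as sp

        # Solve f(f(x)) = exp(x) - 1 for f = a1*x + a2*x^2 + ... term by term;
        # each new equation is linear in the newest coefficient once a1 = 1.
        N = 6
        x = sp.symbols('x')
        a = sp.symbols(f'a1:{N + 1}')             # unknowns a1, ..., aN
        f = sum(a[i] * x**(i + 1) for i in range(N))
        comp = sp.expand(f.subs(x, f))            # f(f(x))

        sols = {a[0]: sp.Integer(1)}              # branch choice: a1 = +1
        for k in range(2, N + 1):
            # coefficient of x^k in exp(x) - 1 is 1/k!
            eq = sp.Eq(comp.coeff(x, k).subs(sols),
                       sp.Rational(1, sp.factorial(k)))
            sols[a[k - 1]] = sp.solve(eq, a[k - 1])[0]
        print(sols)   # a2 = 1/4, a3 = 1/48, a4 = 0, ...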

  75. James Cook Says:

    Also, an ‘order topology’ is usually understood relative to a total ordering, whereas the order James mentions is only a partial order (some sequences are asymptotically incomparable). I don’t know of any general method for wringing topologies out of POs.

    Why can’t we just do the same thing, and take the topology generated by sets of the form {x: x >= a}?

    In any case, I thought Scott had given us a total ordering (see #21-22).

  76. Thomas S. Says:

    I don’t see any clear sense in which there can be only a countable dense subset of growth rates. For instance, given any countable set of integer sequences, you can always diagonalize to produce a new sequence that asymptotically dominates all of them–or is dominated by all of them, or incomparable with all of them.

    I don’t understand how this diagonalization argument works: I see you can find a sequence dominating all of them, but how do you know it’s not a limit point? Dense sets don’t have to be closed under limits of sequences (ℚ certainly isn’t).

    My (faulty?) understanding is this: in your example of {x^a} over all real a∊(1,∞), diagonalization gives you a function dominating all of them asymptotically. Let f(x) = x^1 on [0,1), x^2 on [1,2), and so on; then f(x) dominates all x^a for finite a. But it’s a limit point of the {x^a}, under the “natural” metric induced from the reals,

    d(Θ(x^a), Θ(x^b))=|b-a|,

    just as +∞ is a limit point of ℝ. Likewise, x^1 is dominated by all the {f(x)=x^a | a∊(1,∞)}, but is nevertheless a limit point. Is this topology different from the order topology?

  77. Thomas S. Says:

    I’m worrying about a different problem. There is a whole hierarchy of super-big uncomputable functions, the higher busy beavers, the (i+1)-th being defined as the BB function for a Turing machine with access to an oracle for the i-th higher BB function (the i=0 case is the ordinary Turing machine). These all grow faster than all computable functions… so there’s no dense set of sequences containing only computable functions.

    Of course you can find a countable dense set (under the big-O ordering) for the subset of sequences which are computable. Well, it’s kinda trivial: there are only countably many of them to begin with!

    Anyone know a specific construction for this? m/n-th exponentials (dunno what they’re called in the literature – anyone know?) don’t work; there are none of them between the n^(log_b n) for different bases b.

  78. Thomas S. Says:

    For the case of uncomputable sequences, is there an axiom of choice construction showing a countable dense set exists? Something like

    -pick any Θ(f(n))
    -now pick two more growth rates, Θ(g(n)) and Θ(h(n)), such that g∊o(f), f∊o(h).
    -in the nth step, pick 2^n more growth rates, one strictly contained in each of the 2^n gaps from the previous step
    -continue forever

    I don’t know if this is legal in set theory, but wouldn’t this give you a countable dense set? Or is it only a countable almost-everywhere-dense set? (Or just plain nonsense?)

  79. Thomas S. Says:

    One day, blog software will have native LaTeX support for comments. That day, blogs will be running on quantum servers.

  80. Scott Says:

    I don’t understand how this diagonalization argument works: I see you can find a sequence dominating all of them, but how do you know it’s not a limit point?

    To expand on Andy’s observation: given a countable collection of sequences, define a new sequence whose ith element is the ith element of the ith sequence. This is clearly also a sequence with some growth rate, not a limit point.
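    (A toy instance of the diagonal, with a family picked just for illustration:)

        # Diagonalize over the countable family n^1, n^2, n^3, ...
        family = lambda j: (lambda n: n ** j)   # j-th sequence: n |-> n^j
        diagonal = lambda n: family(n)(n)       # n-th element of the n-th sequence

        # diagonal(n) = n^n eventually dominates n^j for every fixed j
        print([diagonal(n) for n in range(1, 6)])   # [1, 4, 27, 256, 3125]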

  81. Thomas S. Says:

    I meant that, given a topology for the space of sequences (well, actually their equivalence classes modulo asymptotic growth), the new sequence obtained by diagonalization is not a limit point of the countable set (of sequences). (?) I realize that it’s a sequence, but in the “space of sequences” it’s only a point.

  82. Andy D Says:

    Here’s an example. Take for concreteness the countable set of sequences A = { {n^i} }. Diagonalize with a much faster-growing sequence, e.g. s = {3^n}.

    s is not an accumulation point of A, since the ‘open set’ I of sequences asymptotically dominating {2^n} and dominated by {4^n} contains s and is disjoint from A.

    So the diagonalization produces a non-limit point provided you diagonalize with a bit of ‘slack’.

    James (#75)–you’re right, I guess that is a valid way to generate a topology from POs.

  83. Thomas S. Says:

    I see it. Take any countable set of sequences
    A={{a_i}_j}, such that {a_i}_j ∊ o({a_i}_{j+1})

    then even the simple diagonalization of taking the i^2-th element from the i-th sequence,

    b_i={a_{i^2}}_i

    not only dominates all {a_i}, but it’s not a limit point because it’s contained in an open set of sequences which all dominate the {a_i}’s, namely

    {{a_{i^c}}_i}, c∊(1,3).

    (well actually those aren’t integers, but that’s easily fixed.)

    The proof is trivial – it’s just showing that {a_{i^c}}_i ∊ o({a_{i^d}}_i) for all 1 ≤ c < d.

  84. Thomas S. Says:

    Sorry, can’t do “less than”. My last lines were:

    The proof is trivial – it’s just showing that {a_{i^c}}_i ∊ o({a_{i^d}}_i) for all 1 ≤ c ≨ d.

    So indeed, there can’t be a countable dense subset of the sequences under the order topology.

    (I think the site was down for a few minutes just now?)

  85. Thomas S. Says:

    Sorry for all those TeX indices, here’s my comment compiled:

    http://img247.imageshack.us/img247/6904/diagonalxt7.jpg

    (in the 23rd century, I predict blogs will support LaTeX commenting.)

  86. Raoul Ohio Says:

    Andy wrote: The classic example is the family of growth rates f_b(x) = x^b,
    b a real number–uncountably many, totally ordered and quite distinct from each other. This can be adapted to integer sequences, and can also be adapted to give a continuum-sized collection of infinite sets of integers, such that any pair of sets have finite intersection.

    Raoul sez: Recall that x^b for real b is only defined as a limit of functions with rational b. Does this have any relevance to x^b considered as a growth rate?

  87. Andy D Says:

    Well, I think it points up a difficulty often confronted in analysis–that when moving to larger spaces like function/sequence spaces, there are multiple, inequivalent notions of convergence. The function x^b, for irrational b, is approached ‘pointwise’ (that is, fixing any x) by a sequence of functions x^{b_i}, for rational values b_i approximating b, but considered as an asymptotic growth rate x^b is ‘buffered’ from these functions by the ‘interval’ of growth rates

    [ x^b – log(x), x^b + log(x) ].

  88. John Sidles Says:

    Since this thread is winding down, let me say that it is pretty remarkable that no one (including me) has done one of the following two things: (1) proved that f(f(x))=e^x has no closed-form solution, or (2) provided (at least) a numerically computed graph of a solution.

    The only result I have is that the function h(x) = 1/f(1/x) (which I have named “the Doofus transform of f”) satisfies the Doofus-transform of Scott’s equation, namely, h(h(x)) = e^(-1/x).

    The right-hand side is one of the most famous non-analytic functions. Thus Scott’s equation has an essential singularity in the asymptotic limit. Presumably, this is one reason that the problem is hard to solve.

  89. Thomas S. Says:

    (2) provided (at least) a numerically computed graph of the solution.

    Actually I’ve drawn some solutions in Mathematica (numerically); it’s trivial and boring. Basically, any monotone f(x) on [1,a] for some 1 < a < e, satisfying f(1)=a, f(a)=e, can be uniquely extended to a half-exponential – trivially so. So there’s tons of solutions, but most of them look pretty artificial, as if someone glued together a whole bunch of different functions (which is basically what it is). I haven’t thought of any obvious criteria to pick out a “best” or “most natural” half-exponential.

    I don’t see any proof of it being not “elementary” (arithmetic, exponentials, and logs of positive quantities), and I’m not sure how to complete Scott’s proof either (the subtraction & division part). So I’m useless here. I’ll add to the confusion: do any 1/n-th exponentials have elementary solutions? (the ones defined in comment #59)
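    For the record, the gluing is only a few lines; here is a Python sketch of what I did (the linear seed and the choice a = 2 are arbitrary):

        import numpy as np

        # Any increasing seed f on [1, a] with f(1) = a, f(a) = e extends
        # uniquely to a half-exponential via f(f(x)) = e^x.
        a, E = 2.0, np.e                  # any 1 < a < e works

        def f_seed(x):                    # linear seed: [1, a] -> [a, e]
            return a + (E - a) * (x - 1) / (a - 1)

        def f_seed_inv(y):                # its inverse: [a, e] -> [1, a]
            return 1 + (a - 1) * (y - a) / (E - a)

        def f(x):                         # defined here for x >= 1
            if x <= a:
                return f_seed(x)
            if x <= E:                    # on [a, e]: f(x) = e^(f^(-1)(x))
                return np.exp(f_seed_inv(x))
            return np.exp(f(np.log(x)))   # f(e^y) = e^(f(y)) pushes upward

        print(f(f(3.0)), np.exp(3.0))     # agree up to floating-point error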

  90. Thomas S. Says:

    Andy D – I thought through your diagonalization yesterday – I think this is the meat of it (?):

    (i) ∄ an isomorphism φ : ℝ → {Θ(f(n)) | f:ℕ→ℕ}
    such that a < b ⇔ φ(a)∊o(φ(b))

    (note {Θ(f(n))} simply means {f(n)} modulo the equivalence relation Θ, this is needed to make the φ’s single-valued.)

    This implies the other funny result, that the sequences under order topology have no countable dense subset. I thought about restricting this to only computable sequences, which is a countable set. In that case:

    (ii) ∃ an isomorphism φ : ℚ → {Θ(f(n)) | f:ℕ→ℕ is computable}
    such that a < b ⇔ φ(a)∊o(φ(b)); however, no such φ is computable.

    (I think this is true?) The incomputability is a diagonalization argument: if φ were computable, then you could use it to output

    {φ(n)_n} for n∊ℕ

    which is an incomputable sequence (it dominates all computable sequences). I haven’t quite figured out how to prove the existence of a φ (axiom of choice, maybe?), but there’s no obvious reason it shouldn’t exist. (Is there?)

  91. Andy D Says:

    Thomas-
    I’m having difficulty reading since some of your symbols just look like boxes.
    The main thing to note is that growth rates (computable or otherwise) are not totally ordered, so cannot be order-isomorphic to the rationals.
    If you imposed a total order somehow, e.g. in the way I suggested with ultrafilters, then the equivalence classes of computable growth-rates would be a countable, dense-in-itself ordered set without maximal elements but with a minimal element (the class of the constant-1 function).
    Any such set is order-isomorphic to the nonnegative rational numbers, though the isomorphism would definitely be incomputable. If you allowed ‘negative growth rates’ you’d have the order type of the rationals.

    To gain familiarity with properties of total orders, I recommend the book ‘Problems and Theorems in Classical Set Theory’. For more on the computability-theoretic aspects, there is a chapter in ‘Handbook of Recursive Mathematics, Vol. II’ by Rod Downey.

  92. Gil Says:

    Regarding half-exponentials as power series: John Stembridge kindly gave me the following information:

    The formal power series of the form x + higher terms
    form a group w.r.t. composition, and you can define a fractional composition-power g^{r}(x) for any g(x) in this group so that

    g^{r}(g^{s}(x)) = g^{r+s}(x)
    (g^{r})^{s} = g^{rs}

    This is discussed in Problem 5.52 in Richard Stanley’s “Enumerative Combinatorics 2” (EC2) Parts (a) and (b).

    Part (c) of Problem 5.52 reads as follows:

    [5-] (unsolved) “Investigate the combinatorial significance of fractional composition.” Stanley offers as an example the computation of (exp(x)-1)^{1/2}.

    He then gives the terms up through x^16, and mentions that the signs do not seem to be predictable.

    (I do not know anything about radius of convergence etc.)

  93. David Speyer Says:

    I think I can prove that, if F is an analytic function from R to R with everywhere positive derivative and a unique fixed point x_0 such that F'(x_0)>1, then there is an analytic function g with g(g(x))=F(x). The hypotheses don’t apply to e^x, but they do apply to F(x)=e^x+x-1, which might satisfy a complexity theorist.

    “Proof:” Without loss of generality, let the fixed point of F be 0. Let F'(0)=K, and recall that K>1. By the uniqueness of the fixed point, we have F(x)>x for all positive x. In particular, by the intermediate value theorem the range of F contains all positive numbers, and by a similar argument it contains all negative numbers. In short, F is a bijection. Since the derivative of F is always positive, its inverse is also differentiable.

    Let x be a positive real number. Then F^{-1}(x), F^{-2}(x), … is a decreasing sequence of positive numbers, and hence has a limit, which must be zero. In particular, it is eventually close enough to zero that we can apply Taylor’s theorem and deduce that F^{-n-1}(x)=F^{-n}(x)/K+O(F^{-n}(x)^2). Now, consider the sequence a_n(x):=F^{-n}(x)*K^n. Elementary bounds (by which I mean I am too lazy to check this, but I remember thinking through an argument like this several years ago) show that a_n(x) is a Cauchy sequence. Set u(x)=lim_{n→infinity} a_n(x). Similarly, we can define u for x negative. Then u is a globally defined function from R to R with u(F(x))=K u(x). Now, I think that I should be able to show that u is analytic, has analytic inverse, and is bijective from R to R. This is the part that I haven’t checked carefully, but I’ll write below how I think an argument should go.

    In the meantime, if u has the properties described above, then set f(x)=u^{-1}(K^{1/2}*u(x)). Then f(f(x))=u^{-1}(K*u(x))=F(x).

    How to get analyticity and bijectiveness? Well, if we knew that u was continuous and had both a positive and a negative value, then surjectivity would follow from analyticity. This is because the functional equation u(F(x))=K u(x) shows that if a is in the image of u then so is K^n a for every integer n. We could fill in the gaps by the intermediate value theorem. Similarly, if we knew that u'(0) was defined and nonzero then we would know that u'(x) was never zero, as the functional equation shows that otherwise 0 would be an accumulation point of the zeroes of u'(x). Of course, if u'(x) is never zero, then u is injective. So we can get bijectivity from knowing that u is analytic plus some facts that seem like they might be easy.

    How to get that u is analytic? Notice that we only need to prove this in an open neighborhood of zero, the functional equation does the rest. Well, each a_n(x) is analytic. A pointwise limit of analytic functions, sadly, can be practically anything. Even a uniform limit on an open interval of the real line can be any continuous function. However, if a limit of analytic functions is uniform on an open disc in the complex plane, then I think that the limit function is also analytic. (I can provide a sketchy proof of this claim, but I suspect that someone else can provide a reference faster.) And it shouldn’t be too hard to switch our attention to the complex plane and do the Cauchy sequence argument in enough detail to establish uniformity near zero.
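    In case anyone wants to see it happen numerically, here is a rough Python sketch of the construction for F(x) = e^x + x - 1 (iteration counts and brackets are casual choices, not carefully analyzed):

        import numpy as np

        F = lambda x: np.exp(x) + x - 1.0    # fixed point 0, K = F'(0) = 2
        K = 2.0

        def F_inv(y):                        # Newton; F is increasing and convex
            x = y
            for _ in range(60):
                x -= (F(x) - y) / (np.exp(x) + 1.0)
            return x

        def u(x, n=40):                      # u(x) = lim F^(-n)(x) * K^n
            for _ in range(n):
                x = F_inv(x)
            return x * K ** n

        def u_inv(v, lo=1e-9, hi=10.0):      # bisection; u is increasing
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if u(mid) < v else (lo, mid)
            return 0.5 * (lo + hi)

        def f(x):                            # f = u^(-1)(sqrt(K) * u(x))
            return u_inv(np.sqrt(K) * u(x))

        print(f(f(1.5)), F(1.5))             # should agree to several digits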

  94. Scott Says:

    Gil and David: Thanks!!

  95. Gil Says:

    3.B. Ekhad kindly sent me some relevant information. Enjoy!

    The first 60 terms of the sequence of i-th roots of the (normalized) coefficients of the compositional square root of exp(x) − 1, i.e. (i!·abs(a[i]))^(1/i), are:

    [1., .7071067812, .5000000000, .3556558820e-2, .5000000025, .6160954692, .5000000438, .9422033533, 1.141588897, .8867266098, 1.571135778, 1.747680689, 2.001784601, 2.347955937, 2.393358590, 2.986557134, 2.990836889, 3.682232149, 3.911485332, 4.437611129, 4.816722801, 5.249579323, 5.773867033, 6.108382310, 6.796789807, 6.987292027, 7.890607765, 7.768025153, 9.057620512, 8.892137235, 10.29888971, 10.47476655, 11.61479393, 11.97899823, 13.00524359, 13.51992747, 14.46974857, 15.12157912, 16.00739745, 16.79303127, 17.61675546, 18.53866538, 19.29565117, 20.36091315, 21.04076668, 22.26125106, 22.84682019, 24.24063142, 24.70476978, 26.29969360, 26.59719907, 28.43887700, 28.48297509, 30.65848565, 30.21083414, 32.95872702, 31.91811284, 35.33973616, 35.16254282, 37.80159105]

    The first 60 terms of the sequence of i-th roots of the (normalized) coefficients of the compositional square root of x + exp(x) − 1, i.e. (i!·abs(a[i]))^(1/i), are:

    [1.414213562, .5411961003, .5312298753, .4075173333, .4540862325, .4688660027, .5555680564, .6865190893, .6922942652, .8232744049, .9501467990, .9528784858, 1.097314225, 1.227926065, 1.241007972, 1.369967101, 1.511064885, 1.541041759, 1.637360756, 1.795029968, 1.842376984, 1.896561494, 2.077975673, 2.141280484, 2.142612884, 2.359578585, 2.437429471, 2.356887781, 2.640053423, 2.731541178, 2.526497089, 2.919524541, 3.024329542, 2.960631748, 3.197773947, 3.316206329, 3.299996687, 3.474141532, 3.607241896, 3.620503034, 3.747461530, 3.897123980, 3.931361641, 4.018510444, 4.187776039, 4.241793668, 4.272582828, 4.472842341, 4.541856384, 4.515818346, 4.755244927, 4.833247609, 4.753247072, 5.021323258, 5.007172933, 5.266556757, 5.425935683, 5.492949726, 5.566844742, 5.810624950]

  96. DavidS Says:

    I’ll be out of touch with the internet next week, so I want to let anyone still reading know that I haven’t forgotten this thread. In particular, I am curious why the sequences Gil presents appear to have a roughly linear growth. In the meantime, two very quick citations to back up some of what I’ve said.

    (1) I still haven’t gotten around to looking up the original, but Concrete Mathematics, in chapter 9, cites the same result of G.H. Hardy that I did and confirms that the reference is Orders of Infinity.

    (2) In Asymptotic Methods in Analysis, by N. G. de Bruijn (Section 8.2, if I recall correctly; I looked this up in my office yesterday but am now not there), de Bruijn states that, if phi(x) is an analytic function of the form phi(x) = lambda x + … with lambda less than 1, then there is an analytic function u defined in a neighborhood of 0 with phi(u(x)) = u(lambda x). It seems to me that the same argument should hold when lambda is greater than 1 by looking at the inverse function to phi, but that’s my thought, not his. He doesn’t give a proof of the assertion, however.

  97. DavidS Says:

    I’m sorry! I forgot about greater-than and less-than signs!

    after the expression for phi, my previous comment should read “with lambda less than 1, then there is an analytic function u defined in a neighborhood of 0 with phi(u(x))=u(lambda x). It seems to me that the same argument should hold when lambda is greater than 1 by looking at the inverse function to phi, but that’s my thought, not his. He doesn’t give a proof of the assertion, however.”

  98. Neil Dickson Says:

    The question as to whether f:f(f(n))=2^n is exponential or not is actually much simpler than finding a closed form of it or approximation to it.

    n^(log n) is exponential. It is equal to 2^((log n)^2). Let this be g(n).

    g(g(n)) = 2^((log(2^((log n)^2)))^2)
    = 2^(((log n)^2)^2)
    = 2^((log n)^4)

    g(g(n)) is then o(2^n) and g(n) is exponential. Therefore f(n) must also be exponential.

  99. DavidS Says:

    n^{log n} is not exponential, if by exponential you mean “grows faster than e^{c n} for some positive c”. As you point out, it is e^{(log n)^2}, which is much less than e^{cn}. Indeed, n^{log n} is the standard example of a function which grows faster than n^k for any k but slower than e^{c n} for any positive c.

  100. Neil Dickson Says:

    That’s quite a limited definition of “exponential”, since by that definition, e^(sqrt(n)) is not exponential, when most would consider it to be exponential. By “exponential”, I meant 2^(small_omega(log n)), or alternatively, n^(small_omega(1)).

  101. Gil Says:

    Dear Neil,
    The terminology I find useful is the following: let c>1.

    n^c is polynomial

    exp((log n)^c) is quasi-polynomial (e.g. n^(log n))

    exp(exp((log log n)^c)) is quasi-quasi-polynomial

    exp(exp(exp((log log log n)^c))) is quasi-quasi-quasi-polynomial, and so on…

  102. Gil Says:

    On the exponential side, let 0 < c < 1.

    exp(n) is exponential (and also exp(an), a>0)

    exp(n^c) is sub-exponential (CS people would call it mildly exponential)

    exp(exp((log n)^c)) is sub-sub-exponential (CS: sub-exponential)

    exp(exp(exp((log log n)^c))) is sub-sub-sub-exponential (CS: sub-sub-exponential)

    and so on. (The difference between the math and CS usage is similar to the different uses of billion and trillion in the US and Europe.)

    So the nice thing about f: f(f(n))=exp(n) is that it is bigger than all the quasi-quasi-quasi-quasi-…-polynomial growth rates and smaller than all the sub-sub-sub-sub-…-exponential ones.

  103. Peter Shor Says:

    You left out inverse Ackermann star, the number of times you need to iterate the inverse Ackermann function starting with n before you reduce the number to a constant. And 2^inverse Ackermann(n) is the correct growth rate for Davenport-Schinzel sequences of order 4, as I showed with Pankaj Agarwal and Micha Sharir.

  104. Neil Dickson Says:

    Thanks for the clarification. That’s some interesting terminology; it just sounds a bit odd in that there are then NP-hard problems that do not take exponential time, simply because “exponential” is defined differently. Using your terminology, several NP-hard problems that I know would have a “sub-exponential” worst case and some would have a “quasi-polynomial” average case.

    Also, “quasi-polynomial” means something different in terms of abstract algebra: http://en.wikipedia.org/wiki/Quasi-polynomial
    Perhaps “super-polynomial” would be a possible name for it to have consistent naming with your definition of “sub-exponential”. However, then it makes the ambiguity more evident in that “sub-exponential” usually refers to “anything less than exponential” just as “super-polynomial” usually refers to “anything greater than polynomial”.

  105. Gil Says:

    Neil,

    From the point of view of complexity-theoretic polynomial reductions, exp(n^c) where c>0 is still exponential. (This is the reason for the slightly different terminology.) So I would not expect to find NP-complete problems that are sub-exponential in the CS sense (namely, exp(o(n))). Or in other words, the belief is that NP-hard problems are indeed exponential (in the CS sense).

    (Regarding quasi-polynomial growth: one very lovely problem is to distinguish the Boolean functions that can be computed by bounded-depth polynomial-size Boolean circuits from those which require quasi-polynomial-size bounded-depth circuits; e.g. we can find a clique of size log n in a graph with n vertices via a depth-2, size-n^(log n) Boolean circuit. Show that this cannot be done by a polynomial-size depth-1000000 circuit.)