Iterated logarithm
In computer science, the iterated logarithm of [math]\displaystyle{ n }[/math], written [math]\displaystyle{ \log^* n }[/math] (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to [math]\displaystyle{ 1 }[/math].[1] The simplest formal definition is the result of this recurrence relation:
- [math]\displaystyle{ \log^* n := \begin{cases} 0 & \mbox{if } n \le 1; \\ 1 + \log^*(\log n) & \mbox{if } n \gt 1 \end{cases} }[/math]
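As a quick illustration, this recurrence can be transcribed almost verbatim into code. The sketch below is a minimal version, assuming Python, the natural logarithm, and a function name `log_star` of our choosing:

```python
import math

def log_star(n):
    """Iterated (natural) logarithm: how many times log must be applied
    before the value drops to 1 or below, mirroring the recurrence above."""
    if n <= 1:
        return 0
    return 1 + log_star(math.log(n))

print(log_star(5))        # ln 5 ~= 1.61, ln(ln 5) ~= 0.48, so the answer is 2
print(log_star(10**100))  # 4: the value drops below 1 after four logarithms
```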
On the positive real numbers, the continuous super-logarithm (inverse tetration) is essentially equivalent:
- [math]\displaystyle{ \log^* n = \lceil \mathrm {slog}_e(n) \rceil }[/math]
That is, the base-b iterated logarithm is [math]\displaystyle{ \log^* n = y }[/math] if n lies within the interval [math]\displaystyle{ ^{y-1}b\lt n\leq\ ^{y}b }[/math], where [math]\displaystyle{ {^{y}b} = \underbrace{b^{b^{\cdot^{\cdot^{b}}}}}_y }[/math] denotes tetration. However, on the negative real numbers, log-star is [math]\displaystyle{ 0 }[/math], whereas [math]\displaystyle{ \lceil \text{slog}_e(-x)\rceil = -1 }[/math] for positive [math]\displaystyle{ x }[/math], so the two functions differ for negative arguments.
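To see this interval characterization numerically, the sketch below (assuming Python; `tetrate` is a hypothetical helper, not a library function) prints the first few base-e intervals; any n in the y-th interval has [math]\displaystyle{ \log^* n = y }[/math], matching the recursive definition above:

```python
import math

def tetrate(b, y):
    """Return ^y b, a power tower of y copies of b (floats, so small y only)."""
    result = 1.0  # empty tower: ^0 b = 1
    for _ in range(y):
        result = b ** result
    return result

# log* n = y exactly when ^(y-1) e < n <= ^y e
for y in range(1, 4):
    print(y, (tetrate(math.e, y - 1), tetrate(math.e, y)))
# 1 (1.0, 2.718...), 2 (2.718..., 15.15...), 3 (15.15..., 3814279.1...)
```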
The iterated logarithm accepts any positive real number and yields an integer. Graphically, it can be understood as the number of "zig-zags" between the curve y = ln x and the line y = x needed to bring a starting value n down into the interval [math]\displaystyle{ [0, 1] }[/math] on the x-axis: each zig-zag replaces the current value by its logarithm.
In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base [math]\displaystyle{ 2 }[/math]) instead of the natural logarithm (with base e).
Mathematically, the iterated logarithm is well-defined for any base greater than [math]\displaystyle{ e^{1/e} \approx 1.444667 }[/math], not only for base [math]\displaystyle{ 2 }[/math] and base e.
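The role of the base can be checked numerically: for bases at or below [math]\displaystyle{ e^{1/e} }[/math], the map x → log_b x gets stuck at a fixed point above 1 and never reaches 1, so the iterated logarithm is not well defined there. A rough sketch (assuming Python; the name `iterated_log` and the iteration cap are ours):

```python
import math

def iterated_log(n, base):
    """Count applications of log_base until the value is <= 1,
    giving up if the iteration stalls at a fixed point above 1."""
    count = 0
    while n > 1:
        if count > 1000:      # stuck: base is too small (<= e**(1/e))
            return None
        n = math.log(n, base)
        count += 1
    return count

print(iterated_log(1e6, 1.5))  # terminates: 1.5 > e**(1/e) ~= 1.4447
print(iterated_log(1e6, 1.4))  # None: 1.4 < e**(1/e), the values stall near 4.4
```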
Analysis of algorithms
The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as:
- Finding the Delaunay triangulation of a set of points knowing the Euclidean minimum spanning tree: randomized O(n log* n) time.[2]
- Fürer's algorithm for integer multiplication: [math]\displaystyle{ O(n \log n\, 2^{O(\lg^* n)}) }[/math].
- Finding an approximate maximum (element at least as large as the median): lg* n − 4 to lg* n + 2 parallel operations.[3]
- Richard Cole and Uzi Vishkin's distributed algorithm for 3-coloring an n-cycle: O(log* n) synchronous communication rounds.[4]
The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself or any fixed number of its iterates. This is because tetration grows much faster than an iterated exponential of fixed height:
[math]\displaystyle{ {^{y}b} = \underbrace{b^{b^{\cdot^{\cdot^{b}}}}}_y \gg \underbrace{b^{b^{\cdot^{\cdot^{b^{y}}}}}}_n }[/math]
so the inverse grows much slower: [math]\displaystyle{ \log_b^* x \ll \log_b^n x }[/math] for any fixed [math]\displaystyle{ n }[/math].
For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., [math]\displaystyle{ n \le 2^{65536} }[/math], which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5.
x | lg* x |
---|---|
(−∞, 1] | 0 |
(1, 2] | 1 |
(2, 4] | 2 |
(4, 16] | 3 |
(16, 65536] | 4 |
(65536, 2^65536] | 5 |
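The table can be reproduced with exact integer arithmetic, which avoids floating-point overflow for inputs as large as 2^65536. A minimal sketch (assuming Python; the name `lg_star` is ours) relies on the fact that the interval endpoints 2, 4, 16, 65536, ... are integers, so replacing log2 with its integer ceiling at each step never changes which interval the value falls in:

```python
def lg_star(n):
    """Binary iterated logarithm of a positive integer, computed exactly."""
    count = 0
    while n > 1:
        n = (n - 1).bit_length()  # ceil(log2(n)) for integers n >= 2
        count += 1
    return count

# Reproduce the right-hand column of the table above
print([lg_star(x) for x in (1, 2, 3, 4, 16, 65536, 2**65536)])
# [0, 1, 2, 2, 3, 4, 5]
```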
Higher bases give smaller iterated logarithms. Indeed, the only function commonly used in complexity theory that grows more slowly is the inverse Ackermann function[citation needed].
Other applications
The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times one must replace the number by the sum of its digits before reaching its digital root, is [math]\displaystyle{ O(\log^* n) }[/math].
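As a small illustration, additive persistence can be computed directly; a minimal sketch (assuming Python; the name `additive_persistence` is ours):

```python
def additive_persistence(n):
    """Number of digit-sum steps needed to reduce n to a single digit."""
    count = 0
    while n >= 10:
        n = sum(int(d) for d in str(n))
        count += 1
    return count

print(additive_persistence(199))  # 199 -> 19 -> 10 -> 1, so 3
```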
In computational complexity theory, Santhanam[5] shows that the computational resources DTIME — computation time for a deterministic Turing machine — and NTIME — computation time for a non-deterministic Turing machine — are distinct up to [math]\displaystyle{ n\sqrt{\log^*n}. }[/math]
References
- ↑ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. "The iterated logarithm function, in Section 3.2: Standard notations and common functions". Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. pp. 58–59. ISBN 0-262-03384-4.
- ↑ Devillers, Olivier (1992). "Randomization yields simple [math]\displaystyle{ O(n\log^\ast n) }[/math] algorithms for difficult [math]\displaystyle{ \Omega(n) }[/math] problems". International Journal of Computational Geometry & Applications 2 (1): 97–111. doi:10.1142/S021819599200007X.
- ↑ "Finding an approximate maximum". SIAM Journal on Computing 18 (2): 258–267. 1989. doi:10.1137/0218017.
- ↑ "Deterministic coin tossing with applications to optimal parallel list ranking". Information and Control 70 (1): 32–53. 1986. doi:10.1016/S0019-9958(86)80023-7.
- ↑ Santhanam, Rahul (2001). "On Separators, Segregators and Time versus Space". Proceedings of the 16th Annual IEEE Conference on Computational Complexity, Chicago, Illinois, USA, June 18–21, 2001. IEEE Computer Society. pp. 286–294. doi:10.1109/CCC.2001.933895.
Original source: https://en.wikipedia.org/wiki/Iterated_logarithm