Integer square root

Short description: Greatest integer less than or equal to square root

In number theory, the integer square root (isqrt) of a non-negative integer n is the non-negative integer m which is the greatest integer less than or equal to the square root of n, [math]\displaystyle{ \operatorname{isqrt}(n) = \lfloor \sqrt n \rfloor. }[/math]

For example, [math]\displaystyle{ \operatorname{isqrt}(27) = \lfloor \sqrt{27} \rfloor = \lfloor 5.19615242270663 ... \rfloor = 5. }[/math]

Introductory remark

Let [math]\displaystyle{ y }[/math] and [math]\displaystyle{ k }[/math] be non-negative integers.

Algorithms that compute (the decimal representation of) [math]\displaystyle{ \sqrt y }[/math] run forever on each input [math]\displaystyle{ y }[/math] which is not a perfect square.[note 1]

Algorithms that compute [math]\displaystyle{ \lfloor \sqrt y \rfloor }[/math] do not run forever. They are nevertheless capable of computing [math]\displaystyle{ \sqrt y }[/math] up to any desired accuracy [math]\displaystyle{ k }[/math].

Choose any [math]\displaystyle{ k }[/math] and compute [math]\displaystyle{ \lfloor \sqrt {y \times 100^k} \rfloor }[/math].

For example (setting [math]\displaystyle{ y = 2 }[/math]): [math]\displaystyle{ \begin{align} & k = 0: \lfloor \sqrt {2 \times 100^{0}} \rfloor = \lfloor \sqrt {2} \rfloor = 1 \\ & k = 1: \lfloor \sqrt {2 \times 100^{1}} \rfloor = \lfloor \sqrt {200} \rfloor = 14 \\ & k = 2: \lfloor \sqrt {2 \times 100^{2}} \rfloor = \lfloor \sqrt {20000} \rfloor = 141 \\ & k = 3: \lfloor \sqrt {2 \times 100^{3}} \rfloor = \lfloor \sqrt {2000000} \rfloor = 1414 \\ & \vdots \\ & k = 8: \lfloor \sqrt {2 \times 100^{8}} \rfloor = \lfloor \sqrt {20000000000000000} \rfloor = 141421356 \\ & \vdots \\ \end{align} }[/math]

Compare the results with [math]\displaystyle{ \sqrt {2} = 1.41421356237309504880168872420969807856967187537694 ... }[/math]

It appears that the multiplication of the input by [math]\displaystyle{ 100^k }[/math] gives an accuracy of k decimal digits.[note 2]

To compute the (entire) decimal representation of [math]\displaystyle{ \sqrt y }[/math], one can execute [math]\displaystyle{ \operatorname{isqrt}(y) }[/math] an infinite number of times, increasing [math]\displaystyle{ y }[/math] by a factor of [math]\displaystyle{ 100 }[/math] at each pass.

Assume that in the next program ([math]\displaystyle{ \operatorname{sqrtForever} }[/math]) the procedure [math]\displaystyle{ \operatorname{isqrt}(y) }[/math] is already defined and — for the sake of the argument — that all variables can hold integers of unlimited magnitude.

Then [math]\displaystyle{ \operatorname{sqrtForever}(y) }[/math] will print the entire decimal representation of [math]\displaystyle{ \sqrt y }[/math].[note 3]

// Print sqrt(y), without halting
void sqrtForever(unsigned int y)
{
	unsigned int result = isqrt(y);
	printf("%d.", result);	// print result, followed by a decimal point

	while (true)	// repeat forever ...
	{
		y = y * 100;	// theoretical example: overflow is ignored
		result = isqrt(y);
		printf("%d", result % 10);	// print last digit of result
	}
}

The conclusion is that algorithms which compute isqrt() are computationally equivalent to algorithms which compute sqrt().[1]

Basic algorithms

The integer square root of a non-negative integer [math]\displaystyle{ y }[/math] can be defined as [math]\displaystyle{ \lfloor \sqrt y \rfloor = x : x^2 \leq y \lt (x+1)^2, x \in \mathbb{N} }[/math]

For example, [math]\displaystyle{ \operatorname{isqrt}(27) = \lfloor \sqrt{27} \rfloor = 5 }[/math] because [math]\displaystyle{ 6^2 \gt 27 \text{ and } 5^2 \ngtr 27 }[/math].

Algorithm using linear search

The following C programs are straightforward implementations.

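The two C programs below are a minimal sketch of such implementations (ascending and descending linear search); as in the other examples, overflow in the multiplication is ignored.

// Integer square root
// (using linear search, ascending)
unsigned int isqrt(unsigned int y)
{
	// initial underestimate, L <= isqrt(y)
	unsigned int L = 0;

	while ((L + 1) * (L + 1) <= y)
		L = L + 1;

	return L;
}

// Integer square root
// (using linear search, descending)
unsigned int isqrt(unsigned int y)
{
	// initial overestimate, R >= isqrt(y)
	unsigned int R = y;

	while (R * R > y)
		R = R - 1;

	return R;
}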

Linear search using addition

In the program above (linear search, ascending) one can replace multiplication by addition, using the equivalence [math]\displaystyle{ (L+1)^2 = L^2 + 2L + 1 = L^2 + 1 + \sum_{i=1}^L 2. }[/math]

// Integer square root
// (linear search, ascending) using addition
unsigned int isqrt(unsigned int y)
{
	unsigned int L = 0;
	unsigned int a = 1;	// invariant: a = (L + 1)^2
	unsigned int d = 3;	// invariant: d = 2 * L + 3

	while (a <= y)
	{
		a = a + d;	// a becomes (L + 2)^2
		d = d + 2;
		L = L + 1;	// both invariants hold again for the new L
	}
	}

	return L;
}

Algorithm using binary search

Linear search sequentially checks every value until it hits the smallest [math]\displaystyle{ x }[/math] where [math]\displaystyle{ x^2 \gt y }[/math].

A speed-up is achieved by using binary search instead. The following C program is an implementation.

// Integer square root (using binary search)
unsigned int isqrt(unsigned int y)
{
	unsigned int L = 0;
	unsigned int M;
	unsigned int R = y + 1;

	// Invariant: L * L <= y < R * R
	while (L != R - 1)
	{
		M = (L + R) / 2;

		if (M * M <= y)
			L = M;
		else
			R = M;
	}

	return L;
}

Numerical example

For example, if one computes [math]\displaystyle{ \operatorname{isqrt}(2000000) }[/math] using binary search, one obtains the [math]\displaystyle{ [L,R] }[/math] sequence [math]\displaystyle{ \begin{align} & [0,2000001] \rightarrow [0,1000000] \rightarrow [0,500000] \rightarrow [0,250000] \rightarrow [0,125000] \rightarrow [0,62500] \rightarrow [0,31250] \rightarrow [0,15625] \\ & \rightarrow [0,7812] \rightarrow [0,3906] \rightarrow [0,1953] \rightarrow [976,1953] \rightarrow [976,1464] \rightarrow [1220,1464] \rightarrow [1342,1464] \rightarrow [1403,1464] \\ & \rightarrow [1403,1433] \rightarrow [1403,1418] \rightarrow [1410,1418] \rightarrow [1414,1418] \rightarrow [1414,1416] \rightarrow [1414,1415] \end{align} }[/math]

This computation takes 21 iteration steps, whereas linear search (ascending, starting from [math]\displaystyle{ 0 }[/math]) needs 1414 steps.

Algorithm using Newton's method

One way of calculating [math]\displaystyle{ \sqrt{n} }[/math] and [math]\displaystyle{ \operatorname{isqrt}(n) }[/math] is to use Heron's method, which is a special case of Newton's method, to find a solution for the equation [math]\displaystyle{ x^2 - n = 0 }[/math], giving the iterative formula [math]\displaystyle{ x_{k+1} = \frac{1}{2}\!\left(x_k + \frac{n}{x_k}\right), \quad k \ge 0, \quad x_0 \gt 0. }[/math]

The sequence [math]\displaystyle{ \{x_k\} }[/math] converges quadratically to [math]\displaystyle{ \sqrt{n} }[/math] as [math]\displaystyle{ k\to\infty }[/math].

Stopping criterion

One can prove that [math]\displaystyle{ c=1 }[/math] is the largest possible number for which the stopping criterion [math]\displaystyle{ |x_{k+1} - x_{k}| \lt c }[/math] ensures [math]\displaystyle{ \lfloor x_{k+1} \rfloor=\lfloor \sqrt n \rfloor }[/math] in the algorithm above.

In implementations which use number formats that cannot represent all rational numbers exactly (for example, floating point), a stopping constant less than 1 should be used to protect against round-off errors.
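
As an illustration, the following C sketch runs the iteration in double-precision floating point and stops with the constant 0.5 (less than 1); the name isqrt_heron is chosen here purely for illustration.

#include <math.h>

// Minimal sketch: Heron's iteration in double precision with a
// stopping constant of 0.5 (< 1) to guard against round-off.
// The name isqrt_heron is illustrative.
unsigned int isqrt_heron(unsigned int n)
{
	if (n == 0)
		return 0;

	double x = n;	// initial estimate x0 > 0
	double next = (x + n / x) / 2.0;

	while (fabs(next - x) >= 0.5)	// stop when |x_(k+1) - x_k| < 0.5
	{
		x = next;
		next = (x + n / x) / 2.0;
	}

	return (unsigned int) next;	// truncation equals floor for non-negative values
}

For example, isqrt_heron(2000000) returns 1414.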

Domain of computation

Although [math]\displaystyle{ \sqrt{n} }[/math] is irrational for many [math]\displaystyle{ n }[/math], the sequence [math]\displaystyle{ \{x_k\} }[/math] contains only rational terms when [math]\displaystyle{ x_0 }[/math] is rational. Thus, with this method it is unnecessary to exit the field of rational numbers in order to calculate [math]\displaystyle{ \operatorname{isqrt}(n) }[/math], a fact which has some theoretical advantages.

Using only integer division

For computing [math]\displaystyle{ \lfloor \sqrt n \rfloor }[/math] for very large integers n, one can use the quotient of Euclidean division for both of the division operations. This has the advantage of only using integers for each intermediate value, thus making the use of floating point representations of large numbers unnecessary. It is equivalent to using the iterative formula [math]\displaystyle{ x_{k+1} = \left\lfloor \frac{1}{2}\!\left(x_k + \left\lfloor \frac{n}{x_k} \right\rfloor \right) \right\rfloor, \quad k \ge 0, \quad x_0 \gt 0, \quad x_0 \in \mathbb{Z}. }[/math]

By using the fact that [math]\displaystyle{ \left\lfloor \frac{1}{2}\!\left(x_k + \left\lfloor \frac{n}{x_k} \right\rfloor \right) \right\rfloor = \left\lfloor \frac{1}{2}\!\left(x_k + \frac{n}{x_k} \right) \right\rfloor, }[/math]

one can show that this will reach [math]\displaystyle{ \lfloor \sqrt n \rfloor }[/math] within a finite number of iterations.
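
This identity holds because, writing [math]\displaystyle{ n = q x_k + r }[/math] with [math]\displaystyle{ q = \lfloor n/x_k \rfloor }[/math] and [math]\displaystyle{ 0 \le r \lt x_k }[/math], the two expressions inside the outer floor differ by [math]\displaystyle{ r/(2x_k) \lt 1/2 }[/math]; since [math]\displaystyle{ (x_k + q)/2 }[/math] is either an integer or an integer plus [math]\displaystyle{ 1/2 }[/math], adding less than [math]\displaystyle{ 1/2 }[/math] to it cannot change its floor.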

In the original version, one has [math]\displaystyle{ x_k \ge \sqrt n }[/math] for [math]\displaystyle{ k \ge 1 }[/math], and [math]\displaystyle{ x_k \gt x_{k+1} }[/math] for [math]\displaystyle{ x_k \gt \sqrt n }[/math]. So in the integer version, one has [math]\displaystyle{ \lfloor x_k \rfloor \ge \lfloor\sqrt n\rfloor }[/math] and [math]\displaystyle{ x_k \ge \lfloor x_k \rfloor \gt x_{k+1} \ge \lfloor x_{k+1}\rfloor }[/math] until the final solution [math]\displaystyle{ x_s }[/math] is reached. For the final solution [math]\displaystyle{ x_s }[/math], one has [math]\displaystyle{ \lfloor \sqrt n\rfloor\le\lfloor x_s\rfloor \le \sqrt n }[/math] and [math]\displaystyle{ \lfloor x_{s+1} \rfloor \ge \lfloor x_s \rfloor }[/math], so the stopping criterion is [math]\displaystyle{ \lfloor x_{k+1} \rfloor \ge \lfloor x_k \rfloor }[/math].

However, [math]\displaystyle{ \lfloor \sqrt n \rfloor }[/math] is not necessarily a fixed point of the above iterative formula. Indeed, it can be shown that [math]\displaystyle{ \lfloor \sqrt n \rfloor }[/math] is a fixed point if and only if [math]\displaystyle{ n + 1 }[/math] is not a perfect square. If [math]\displaystyle{ n + 1 }[/math] is a perfect square, the sequence ends up in a period-two cycle between [math]\displaystyle{ \lfloor \sqrt n \rfloor }[/math] and [math]\displaystyle{ \lfloor \sqrt n \rfloor + 1 }[/math] instead of converging.
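
For example, for [math]\displaystyle{ n = 3 }[/math] (where [math]\displaystyle{ n + 1 = 4 }[/math] is a perfect square) the iteration alternates: [math]\displaystyle{ \lfloor \tfrac{1}{2}(1 + \lfloor 3/1 \rfloor) \rfloor = 2 }[/math] and [math]\displaystyle{ \lfloor \tfrac{1}{2}(2 + \lfloor 3/2 \rfloor) \rfloor = 1 }[/math], so the sequence cycles between [math]\displaystyle{ \lfloor \sqrt 3 \rfloor = 1 }[/math] and [math]\displaystyle{ 2 }[/math].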

Example implementation in C

// Square root of integer
unsigned int int_sqrt(unsigned int s)
{
	// Zero yields zero
	// One yields one
	if (s <= 1)
		return s;

	// Initial estimate (must be too high)
	unsigned int x0 = s / 2;

	// Update
	unsigned int x1 = (x0 + s / x0) / 2;

	while (x1 < x0)	// Bound check
	{
		x0 = x1;
		x1 = (x0 + s / x0) / 2;
	}
	return x0;
}

Numerical example

For example, if one computes the integer square root of 2000000 using the algorithm above, one obtains the sequence [math]\displaystyle{ \begin{align} & 1000000 \rightarrow 500001 \rightarrow 250002 \rightarrow 125004 \rightarrow 62509 \rightarrow 31270 \rightarrow 15666 \rightarrow 7896 \\ & \rightarrow 4074 \rightarrow 2282 \rightarrow 1579 \rightarrow 1422 \rightarrow 1414 \rightarrow 1414 \end{align} }[/math] In total 13 iteration steps are needed. Although Heron's method converges quadratically once it is close to the solution, it gains less than one bit of precision per iteration at the beginning. This means that the choice of the initial estimate is critical for the performance of the algorithm.

When a fast computation of the integer part of the binary logarithm or of the bit length is available (for example, std::bit_width in C++20), it is better to start at [math]\displaystyle{ x_0 = 2^{\lfloor (\log_2 n) /2 \rfloor+1}, }[/math] which is the least power of two greater than [math]\displaystyle{ \sqrt n }[/math]. In the example of the integer square root of 2000000, [math]\displaystyle{ \lfloor \log_2 n \rfloor = 20 }[/math], [math]\displaystyle{ x_0 = 2^{11} = 2048 }[/math], and the resulting sequence is [math]\displaystyle{ 2048 \rightarrow 1512 \rightarrow 1417 \rightarrow 1414 \rightarrow 1414. }[/math] In this case only four iteration steps are needed.
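
A minimal C sketch of this variant is given below; the helper bit_length and the name int_sqrt_fast are illustrative (a compiler intrinsic or std::bit_width would replace the helper in practice), and the rest follows the int_sqrt routine above.

// Number of bits needed to represent s (illustrative helper;
// equals floor(log2(s)) + 1 for s > 0)
unsigned int bit_length(unsigned int s)
{
	unsigned int len = 0;
	while (s != 0)
	{
		s >>= 1;
		len = len + 1;
	}
	return len;
}

// Integer square root with an initial estimate derived from the bit length
unsigned int int_sqrt_fast(unsigned int s)
{
	if (s <= 1)
		return s;

	// x0 = 2^(floor(log2(s) / 2) + 1), the least power of two above sqrt(s)
	unsigned int x0 = 1u << ((bit_length(s) - 1) / 2 + 1);
	unsigned int x1 = (x0 + s / x0) / 2;

	while (x1 < x0)
	{
		x0 = x1;
		x1 = (x0 + s / x0) / 2;
	}
	return x0;
}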

Digit-by-digit algorithm

The traditional pen-and-paper algorithm for computing the square root [math]\displaystyle{ \sqrt{n} }[/math] works from the most significant digit places down to the least significant, and for each new digit picks the largest value that still yields a square [math]\displaystyle{ \leq n }[/math]. If one stops after the ones place, the result is the integer square root.

Using bitwise operations

If working in base 2, the choice of digit is simplified to that between 0 (the "small candidate") and 1 (the "large candidate"), and digit manipulations can be expressed in terms of binary shift operations. With * being multiplication, << being left shift, and >> being logical right shift, a recursive algorithm to find the integer square root of any natural number is:

def integer_sqrt(n: int) -> int:
    assert n >= 0, "sqrt works for only non-negative inputs"
    if n < 2:
        return n

    # Recursive call:
    small_cand = integer_sqrt(n >> 2) << 1
    large_cand = small_cand + 1
    if large_cand * large_cand > n:
        return small_cand
    else:
        return large_cand


# equivalently:
def integer_sqrt_iter(n: int) -> int:
    assert n >= 0, "sqrt works for only non-negative inputs"
    if n < 2:
        return n

    # Find the shift amount: the smallest even number of bits such that
    # (n >> shift) == 0, i.e. shift = 2 * ceil(n.bit_length() / 2).
    shift = 2
    while (n >> shift) != 0:
        shift += 2

    # Iteratively set the bits of the result, from the most significant
    # downwards (this loop replaces the recursion above).
    result = 0
    while shift >= 0:
        result = result << 1
        large_cand = (
            result + 1
        )  # Same as result ^ 1 (xor), because the last bit is always 0.
        if large_cand * large_cand <= n >> shift:
            result = large_cand
        shift -= 2

    return result

Traditional pen-and-paper presentations of the digit-by-digit algorithm include various optimizations not present in the code above, in particular the trick of pre-subtracting the square of the previous digits which makes a general multiplication step unnecessary. See Methods of computing square roots § Binary numeral system (base 2) for an example.[2]
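
A well-known multiplication-free form of this optimization is sketched below in C (the name isqrt_bitwise is illustrative, and a 32-bit unsigned int is assumed): the running remainder already has the square of the digits found so far subtracted, so each step needs only comparisons, subtractions, and shifts.

// Minimal sketch: digit-by-digit (base 2) integer square root
// without multiplication; assumes unsigned int has 32 bits.
unsigned int isqrt_bitwise(unsigned int num)
{
	unsigned int res = 0;
	unsigned int bit = 1u << 30;	// highest power of 4 representable in 32 bits

	// Start with the highest power of 4 not exceeding the argument.
	while (bit > num)
		bit >>= 2;

	while (bit != 0)
	{
		if (num >= res + bit)
		{
			// The next bit of the root is 1: remove its
			// contribution from the remainder.
			num -= res + bit;
			res = (res >> 1) + bit;
		}
		else
		{
			// The next bit of the root is 0.
			res >>= 1;
		}
		bit >>= 2;
	}
	return res;
}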

In programming languages

Some programming languages dedicate an explicit operation to the integer square root calculation in addition to the general case or can be extended by libraries to this end.
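
For example, Python's standard library provides math.isqrt (since version 3.8), and Java's java.math.BigInteger class offers a sqrt method (since Java 9).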

Notes

  1. The square roots of the perfect squares (e.g., 0, 1, 4, 9, 16) are integers. In all other cases, the square roots of positive integers are irrational numbers.
  2. It is no surprise that the repeated multiplication by 100 is a feature in (Jarvis 2006)
  3. The fractional part of square roots of perfect squares is rendered as 000....

References
