Next-bit test

Short description: Method for testing the randomness of pseudo-random number generators

In cryptography and the theory of computation, the next-bit test[1] is a test against pseudo-random number generators. We say that a sequence of bits passes the next-bit test at any position [math]\displaystyle{ i }[/math] in the sequence if no attacker who knows the first [math]\displaystyle{ i }[/math] bits (but not the seed) can predict the [math]\displaystyle{ (i+1) }[/math]st bit with reasonable computational power.
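
For intuition, the following sketch (illustrative only; the generator weak_bits and the predictor predict_next are hypothetical names, not anything from the references) shows a deliberately weak generator and a trivial next-bit predictor whose success probability is clearly above 1/2, so the generator fails the test:

  import random

  def weak_bits(seed, n):
      # Toy generator: each new bit repeats the previous one with probability 3/4,
      # so the stream is far from unpredictable.
      rng = random.Random(seed)
      bits = [rng.randint(0, 1)]
      for _ in range(n - 1):
          bits.append(bits[-1] if rng.random() < 0.75 else 1 - bits[-1])
      return bits

  def predict_next(prefix):
      # Trivial predictor: guess that the next bit equals the last bit seen.
      return prefix[-1]

  # Estimate the predictor's success probability at position i = 10.
  trials = 10000
  hits = 0
  for t in range(trials):
      s = weak_bits(t, 11)
      hits += predict_next(s[:10]) == s[10]
  print(hits / trials)  # roughly 0.75, well above 1/2: the generator fails the test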

Precise statement(s)

Let [math]\displaystyle{ P }[/math] be a polynomial, and [math]\displaystyle{ S=\{S_k\} }[/math] be a collection of sets such that [math]\displaystyle{ S_k }[/math] contains [math]\displaystyle{ P(k) }[/math]-bit long sequences. Moreover, let [math]\displaystyle{ \mu_k }[/math] be the probability distribution of the strings in [math]\displaystyle{ S_k }[/math].

We now define the next-bit test in two different ways.

Boolean circuit formulation

A predicting collection[2] [math]\displaystyle{ C=\{C_k^i\} }[/math] is a collection of Boolean circuits such that each circuit [math]\displaystyle{ C_k^i }[/math] has fewer than [math]\displaystyle{ P_C(k) }[/math] gates and exactly [math]\displaystyle{ i }[/math] inputs. Let [math]\displaystyle{ p_{k,i}^C }[/math] be the probability that, on input the first [math]\displaystyle{ i }[/math] bits of [math]\displaystyle{ s }[/math], a string selected from [math]\displaystyle{ S_k }[/math] with probability [math]\displaystyle{ \mu_k(s) }[/math], the circuit correctly predicts [math]\displaystyle{ s_{i+1} }[/math], i.e.:

[math]\displaystyle{ p_{k,i}^C={\mathcal P}\left[ C_k^i(s_1\ldots s_i)=s_{i+1} \mid s\in S_k\text{ with probability }\mu_k(s)\right] }[/math]

Now, we say that [math]\displaystyle{ \{S_k\}_k }[/math] passes the next-bit test if for any predicting collection [math]\displaystyle{ C }[/math] and any polynomial [math]\displaystyle{ Q }[/math], for all but finitely many [math]\displaystyle{ k }[/math] and all [math]\displaystyle{ 0\lt i\lt P(k) }[/math]:

[math]\displaystyle{ p_{k,i}^C\lt \frac{1}{2}+\frac{1}{Q(k)} }[/math]
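
For a concrete illustration (an example, not part of the formal definition): suppose every string of [math]\displaystyle{ S_k }[/math] satisfies [math]\displaystyle{ s_{P(k)}=s_1\oplus\cdots\oplus s_{P(k)-1} }[/math]. A linear-size circuit computing this parity from the first [math]\displaystyle{ P(k)-1 }[/math] bits predicts the last bit with probability

[math]\displaystyle{ p_{k,P(k)-1}^C=1\geq\frac{1}{2}+\frac{1}{Q(k)}\quad\text{for, say, }Q(k)=3, }[/math]

so such a collection fails the next-bit test at position [math]\displaystyle{ P(k)-1 }[/math].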

Probabilistic Turing machines

We can also define the next-bit test in terms of probabilistic Turing machines, although this definition is somewhat stronger (see Adleman's theorem). Let [math]\displaystyle{ \mathcal M }[/math] be a probabilistic Turing machine running in polynomial time. Let [math]\displaystyle{ p_{k,i}^{\mathcal M} }[/math] be the probability that [math]\displaystyle{ \mathcal M }[/math] predicts the [math]\displaystyle{ (i+1) }[/math]st bit correctly, i.e.

[math]\displaystyle{ p_{k,i}^{\mathcal M}={\mathcal P}\left[\mathcal M(s_1\ldots s_i)=s_{i+1} \mid s\in S_k\text{ with probability }\mu_k(s)\right] }[/math]

We say that the collection [math]\displaystyle{ S=\{S_k\} }[/math] passes the next-bit test if for every polynomial [math]\displaystyle{ Q }[/math], for all but finitely many [math]\displaystyle{ k }[/math], and for all [math]\displaystyle{ 0\lt i\lt P(k) }[/math]:

[math]\displaystyle{ p_{k,i}^{\mathcal M}\lt \frac{1}{2}+\frac{1}{Q(k)} }[/math]

Completeness for Yao's test

The next-bit test is a particular case of Yao's test for random sequences, and passing it is therefore a necessary condition for passing Yao's test. However, Yao showed that it is also a sufficient condition.[1]

We prove it now in the case of the probabilistic Turing machine, since Adleman has already done the work of replacing randomization with non-uniformity in his theorem. The case of Boolean circuits cannot be derived from this case (since it involves deciding potentially undecidable problems), but the proof of Adleman's theorem can be easily adapted to the case of non-uniform Boolean circuit families.

Let [math]\displaystyle{ \mathcal M }[/math] be a distinguisher for the probabilistic version of Yao's test, i.e. a probabilistic Turing machine running in polynomial time, such that for some polynomial [math]\displaystyle{ Q }[/math] and infinitely many [math]\displaystyle{ k }[/math]

[math]\displaystyle{ |p_{k,S}^{\mathcal M}-p_{k,U}^{\mathcal M}|\geq\frac{1}{Q(k)} }[/math]

Let [math]\displaystyle{ R_{k,i}=\{s_1\ldots s_iu_{i+1}\ldots u_{P(k)}| s\in S_k, u\in\{0,1\}^{P(k)}\} }[/math]. We have [math]\displaystyle{ R_{k,0}=\{0,1\}^{P(k)} }[/math] and [math]\displaystyle{ R_{k,P(k)}=S_k }[/math]. Then, we notice that [math]\displaystyle{ \sum_{i=0}^{P(k)-1}|p_{k,R_{k,i+1}}^{\mathcal M}-p_{k,R_{k,i}}^{\mathcal M}|\geq |p^{\mathcal M}_{k,R_{k,P(k)}}-p^{\mathcal M}_{k,R_{k,0}}|=|p_{k,S}^{\mathcal M}-p_{k,U}^{\mathcal M}|\geq\frac{1}{Q(k)} }[/math]. Therefore, at least one of the [math]\displaystyle{ P(k) }[/math] differences [math]\displaystyle{ |p_{k,R_{k,i+1}}^{\mathcal M}-p_{k,R_{k,i}}^{\mathcal M}| }[/math] must be at least [math]\displaystyle{ \frac{1}{Q(k)P(k)} }[/math], as the calculation below makes explicit.
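
In more detail, the sum of (signed) differences telescopes:

[math]\displaystyle{ \sum_{i=0}^{P(k)-1}\left(p^{\mathcal M}_{k,R_{k,i+1}}-p^{\mathcal M}_{k,R_{k,i}}\right)=p^{\mathcal M}_{k,R_{k,P(k)}}-p^{\mathcal M}_{k,R_{k,0}}, }[/math]

so the triangle inequality gives the displayed lower bound on the sum of absolute differences, and the pigeonhole principle over its [math]\displaystyle{ P(k) }[/math] terms yields an index [math]\displaystyle{ i }[/math] with [math]\displaystyle{ |p_{k,R_{k,i+1}}^{\mathcal M}-p_{k,R_{k,i}}^{\mathcal M}|\geq\frac{1}{Q(k)P(k)} }[/math].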

Next, we consider probability distributions [math]\displaystyle{ \mu_{k,i} }[/math] and [math]\displaystyle{ \overline{\mu_{k,i}} }[/math] on [math]\displaystyle{ R_{k,i} }[/math]. Distribution [math]\displaystyle{ \mu_{k,i} }[/math] is obtained by choosing the first [math]\displaystyle{ i }[/math] bits as those of a string drawn from [math]\displaystyle{ S_k }[/math] according to [math]\displaystyle{ \mu_k }[/math], and the remaining [math]\displaystyle{ P(k)-i }[/math] bits uniformly at random; [math]\displaystyle{ \overline{\mu_{k,i}} }[/math] is defined in the same way, except that the [math]\displaystyle{ i }[/math]th bit is complemented. We thus have:

[math]\displaystyle{ \mu_{k,i}(w_1\ldots w_{P(k)})=\left(\sum_{s\in S_k, s_1\ldots s_i=w_1\ldots w_i}\mu_k(s)\right)\left(\frac{1}{2}\right)^{P(k)-i} }[/math]

[math]\displaystyle{ \overline{\mu_{k,i}}(w_1\ldots w_{P(k)})=\left(\sum_{s\in S_k, s_1\ldots s_{i-1}(1-s_i)=w_1\ldots w_i}\mu_k(s)\right)\left(\frac{1}{2}\right)^{P(k)-i} }[/math]

We thus have [math]\displaystyle{ \mu_{k,i}=\frac{1}{2}(\mu_{k,i+1}+\overline{\mu_{k,i+1}}) }[/math] (a short calculation, given below, verifies this), hence [math]\displaystyle{ p^{\mathcal M}_{\mu_{k,i}}=\frac{1}{2}\left(p^{\mathcal M}_{\mu_{k,i+1}}+p^{\mathcal M}_{\overline{\mu_{k,i+1}}}\right) }[/math], and the distributions [math]\displaystyle{ \mu_{k,i+1} }[/math] and [math]\displaystyle{ \overline{\mu_{k,i+1}} }[/math] can be distinguished by [math]\displaystyle{ \mathcal M }[/math]. Without loss of generality, we can assume that [math]\displaystyle{ p^{\mathcal M}_{\mu_{k,i+1}}-p^{\mathcal M}_{\overline{\mu_{k,i+1}}}\geq\frac{1}{R(k)} }[/math], with [math]\displaystyle{ R }[/math] a polynomial (taking [math]\displaystyle{ R(k)=Q(k)P(k)/2 }[/math] works, since [math]\displaystyle{ |p^{\mathcal M}_{\mu_{k,i+1}}-p^{\mathcal M}_{\overline{\mu_{k,i+1}}}|=2\,|p_{k,R_{k,i+1}}^{\mathcal M}-p_{k,R_{k,i}}^{\mathcal M}| }[/math]).
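
The calculation simply splits the sum defining [math]\displaystyle{ \mu_{k,i} }[/math] according to the value of [math]\displaystyle{ s_{i+1} }[/math]: for any word [math]\displaystyle{ w=w_1\ldots w_{P(k)} }[/math],

[math]\displaystyle{ \mu_{k,i}(w)=\left(\sum_{s\in S_k,\, s_1\ldots s_i=w_1\ldots w_i}\mu_k(s)\right)\left(\frac{1}{2}\right)^{P(k)-i}=\frac{1}{2}\left(\mu_{k,i+1}(w)+\overline{\mu_{k,i+1}}(w)\right), }[/math]

since the terms with [math]\displaystyle{ s_{i+1}=w_{i+1} }[/math] contribute [math]\displaystyle{ \frac{1}{2}\mu_{k,i+1}(w) }[/math] and those with [math]\displaystyle{ s_{i+1}=1-w_{i+1} }[/math] contribute [math]\displaystyle{ \frac{1}{2}\overline{\mu_{k,i+1}}(w) }[/math].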

This gives us a possible construction of a probabilistic Turing machine [math]\displaystyle{ \mathcal N }[/math] that predicts the next bit: upon receiving the first [math]\displaystyle{ i }[/math] bits of a sequence, [math]\displaystyle{ \mathcal N }[/math] pads this input with a guessed bit [math]\displaystyle{ l }[/math] followed by [math]\displaystyle{ P(k)-i-1 }[/math] bits chosen uniformly at random. It then runs [math]\displaystyle{ \mathcal M }[/math] and outputs [math]\displaystyle{ l }[/math] if the result is [math]\displaystyle{ 1 }[/math], and [math]\displaystyle{ 1-l }[/math] otherwise. A direct computation shows that [math]\displaystyle{ \mathcal N }[/math] predicts [math]\displaystyle{ s_{i+1} }[/math] correctly with probability at least [math]\displaystyle{ \frac{1}{2}+\frac{1}{2R(k)} }[/math], so [math]\displaystyle{ S }[/math] fails the next-bit test, which concludes the proof.
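
The reduction just described can be sketched as follows (an illustrative sketch only: M stands for any 0/1-valued distinguisher, and the names next_bit_predictor and total_len are hypothetical, not notation from the source):

  import random

  def next_bit_predictor(prefix, M, total_len, rng=random):
      # Guess a value for the next bit (the bit l in the construction above),
      # pad the remaining positions with uniformly random bits, and keep the
      # guess exactly when the distinguisher M accepts the padded candidate.
      i = len(prefix)
      guess = rng.randint(0, 1)
      padding = [rng.randint(0, 1) for _ in range(total_len - i - 1)]
      candidate = list(prefix) + [guess] + padding
      return guess if M(candidate) == 1 else 1 - guess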

References

  1. Andrew Chi-Chih Yao. Theory and Applications of Trapdoor Functions. In Proceedings of the 23rd IEEE Symposium on Foundations of Computer Science, 1982.
  2. Manuel Blum and Silvio Micali. How to Generate Cryptographically Strong Sequences of Pseudo-Random Bits. SIAM Journal on Computing, Vol. 13, No. 4, November 1984.