Fast syndrome-based hash

Fast syndrome-based hash function (FSB)
General
Designers: Daniel Augot, Matthieu Finiasz, Nicolas Sendrier
First published: 2003
Derived from: McEliece cryptosystem and Niederreiter cryptosystem
Successors: Improved fast syndrome-based hash function
Related to: Syndrome-based hash function
Detail
Digest sizes: Scalable

In cryptography, the fast syndrome-based hash functions (FSB) are a family of cryptographic hash functions introduced in 2003 by Daniel Augot, Matthieu Finiasz, and Nicolas Sendrier. [1] Unlike most other cryptographic hash functions in use today, FSB can to a certain extent be proven secure: breaking FSB is provably at least as difficult as solving a certain NP-complete problem known as regular syndrome decoding, so FSB is provably secure. Though it is not known whether NP-complete problems are solvable in polynomial time, it is generally assumed that they are not.

Several versions of FSB have been proposed, the latest of which was submitted to the SHA-3 cryptography competition but was rejected in the first round. Though all versions of FSB claim provable security, some preliminary versions were eventually broken. [2] The design of the latest version of FSB has, however, taken this attack into account and remains secure against all currently known attacks.

As usual, provable security comes at a cost. FSB is slower than traditional hash functions and uses quite a lot of memory, which makes it impractical in memory-constrained environments. Furthermore, the compression function used in FSB needs a large output size to guarantee security. This last problem has been solved in recent versions by simply compressing the output with another compression function, Whirlpool. However, though the authors argue that adding this last compression does not reduce security, it makes a formal security proof impossible. [3]

Description of the hash function

We start with a compression function [math]\displaystyle{ \phi }[/math] with parameters [math]\displaystyle{ {n,r,w} }[/math] such that [math]\displaystyle{ n \gt w }[/math] and [math]\displaystyle{ w \log(n/w) \gt r }[/math]. This function will only work on messages with length [math]\displaystyle{ s = w\log(n/w) }[/math]; [math]\displaystyle{ r }[/math] will be the size of the output. Furthermore, we want [math]\displaystyle{ n,r,w,s }[/math] and [math]\displaystyle{ \log(n/w) }[/math] to be natural numbers, where [math]\displaystyle{ \log }[/math] denotes the binary logarithm. The reason for [math]\displaystyle{ w \cdot \log(n/w) \gt r }[/math] is that we want [math]\displaystyle{ \phi }[/math] to be a compression function, so the input must be larger than the output. We will later use the Merkle–Damgård construction to extend the domain to inputs of arbitrary lengths.

The basis of this function consists of a (randomly chosen) binary [math]\displaystyle{ r \times n }[/math] matrix [math]\displaystyle{ H }[/math] which acts on words of [math]\displaystyle{ n }[/math] bits by matrix multiplication. Here we encode the [math]\displaystyle{ w\log(n/w) }[/math]-bit message as a vector in [math]\displaystyle{ (\mathbf{F}_2)^n }[/math], the [math]\displaystyle{ n }[/math]-dimensional vector space over the field of two elements, so the output will be a string of [math]\displaystyle{ r }[/math] bits.

For security purposes as well as to get a faster hash speed we want to use only “regular words of weight [math]\displaystyle{ w }[/math]” as input for our matrix.

Definitions

  • A message is called a word of weight [math]\displaystyle{ w }[/math] and length [math]\displaystyle{ n }[/math] if it consists of [math]\displaystyle{ n }[/math] bits and exactly [math]\displaystyle{ w }[/math] of those bits are ones.
  • A word of weight [math]\displaystyle{ w }[/math] and length [math]\displaystyle{ n }[/math] is called regular if every interval [math]\displaystyle{ [(i-1)(n/w), i(n/w)) }[/math] contains exactly one nonzero entry, for all [math]\displaystyle{ 1 \le i \le w }[/math]. More intuitively, this means that if we chop the message into [math]\displaystyle{ w }[/math] equal parts, then each part contains exactly one nonzero entry.

The compression function

There are exactly [math]\displaystyle{ (n/w)^w }[/math] different regular words of weight [math]\displaystyle{ w }[/math] and length [math]\displaystyle{ n }[/math], so we need exactly [math]\displaystyle{ \log((n/w)^w)= w \log(n/w) = s }[/math] bits of data to encode these regular words. We fix a bijection from the set of bit strings of length [math]\displaystyle{ s }[/math] to the set of regular words of weight [math]\displaystyle{ w }[/math] and length [math]\displaystyle{ n }[/math], and then the FSB compression function is defined as follows:

  1. input: a message of size [math]\displaystyle{ s }[/math]
  2. convert to regular word of length [math]\displaystyle{ n }[/math] and weight [math]\displaystyle{ w }[/math]
  3. multiply by the matrix [math]\displaystyle{ H }[/math]
  4. output: hash of size [math]\displaystyle{ r }[/math]
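To make this concrete, here is a minimal Python sketch of this plain syndrome-based compression, assuming [math]\displaystyle{ n/w }[/math] is a power of two, the message is given as a string of '0'/'1' characters, and the matrix [math]\displaystyle{ H }[/math] is given as a list of rows; the helper names are illustrative and not part of the FSB specification.

  def to_regular_word(msg_bits, n, w):
      """Encode a w*log2(n/w)-bit message string as a regular word of length n and weight w."""
      block = n // w                       # size of each interval, n/w
      chunk = block.bit_length() - 1       # bits per message chunk, log2(n/w)
      word = [0] * n
      for i in range(w):
          # the value of the i-th chunk selects the position of the single 1 in block i
          value = int(msg_bits[i * chunk:(i + 1) * chunk], 2)
          word[i * block + value] = 1
      return word

  def syndrome_compress(H, msg_bits, n, w, r):
      """Multiply the regular word by H over F_2 to obtain the r-bit output."""
      word = to_regular_word(msg_bits, n, w)
      return [sum(H[row][col] & word[col] for col in range(n)) % 2 for row in range(r)]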

This version is usually called syndrome-based compression. It is quite slow, and in practice the hashing is done in a different and faster way, resulting in fast syndrome-based compression. We split [math]\displaystyle{ H }[/math] into [math]\displaystyle{ w }[/math] sub-matrices [math]\displaystyle{ H_i }[/math] of size [math]\displaystyle{ r \times n/w }[/math] and we fix a bijection from the bit strings of length [math]\displaystyle{ w\log(n/w) }[/math] to the set of sequences of [math]\displaystyle{ w }[/math] numbers between 1 and [math]\displaystyle{ n/w }[/math]. This is equivalent to a bijection to the set of regular words of length [math]\displaystyle{ n }[/math] and weight [math]\displaystyle{ w }[/math], since we can see such a word as a sequence of numbers between 1 and [math]\displaystyle{ n/w }[/math]. The compression function then looks as follows:

  1. Input: message of size [math]\displaystyle{ s }[/math]
  2. Convert [math]\displaystyle{ s }[/math] to a sequence of [math]\displaystyle{ w }[/math] numbers [math]\displaystyle{ s_1,\dots,s_w }[/math] between 1 and [math]\displaystyle{ n/w }[/math]
  3. Add the corresponding columns of the matrices [math]\displaystyle{ H_i }[/math] (column [math]\displaystyle{ s_i }[/math] of [math]\displaystyle{ H_i }[/math]) to obtain a binary string of length [math]\displaystyle{ r }[/math]
  4. Output: hash of size [math]\displaystyle{ r }[/math]
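A minimal Python sketch of this faster variant, assuming each sub-matrix [math]\displaystyle{ H_i }[/math] is given as a list of its [math]\displaystyle{ n/w }[/math] columns (each column a tuple of [math]\displaystyle{ r }[/math] bits) and the message as a string of '0'/'1' characters; the function name is illustrative.

  def fast_syndrome_compress(sub_blocks, msg_bits, chunk_bits):
      """XOR one column from each sub-block H_i, selected by the i-th chunk of the message."""
      r = len(sub_blocks[0][0])
      digest = [0] * r
      for i, block in enumerate(sub_blocks):
          # interpret the i-th chunk of the message as a column index into H_i
          index = int(msg_bits[i * chunk_bits:(i + 1) * chunk_bits], 2)
          digest = [a ^ b for a, b in zip(digest, block[index])]
      return digest

Only [math]\displaystyle{ w }[/math] columns of [math]\displaystyle{ r }[/math] bits each are touched per call, which is why this formulation is much faster than a full [math]\displaystyle{ r \times n }[/math] matrix multiplication.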

We can now use the Merkle–Damgård construction to generalize the compression function to accept inputs of arbitrary lengths.
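A much simplified sketch of that iteration, assuming a compression function compress() that maps an [math]\displaystyle{ s }[/math]-bit string to an [math]\displaystyle{ r }[/math]-bit string; the padding and length strengthening that the real construction requires are omitted here for brevity.

  def merkle_damgard(compress, message_bits, s, r, iv):
      """Iterate an s-bit-to-r-bit compression function over a message of arbitrary length."""
      block_len = s - r                    # message bits absorbed per call
      chain = iv                           # current r-bit chaining value (string of '0'/'1')
      for i in range(0, len(message_bits), block_len):
          block = message_bits[i:i + block_len].ljust(block_len, "0")   # naive zero padding
          chain = compress(chain + block)  # chaining value and message block form the s-bit input
      return chain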

Example of the compression

Situation and initialization: Hash the message [math]\displaystyle{ s = 010011 }[/math] using the [math]\displaystyle{ 4 \times 12 }[/math] matrix [math]\displaystyle{ H }[/math]
[math]\displaystyle{ H = \left(\begin{array}{llllcllllcllll} 1&0&1&1 &~& 0&1&0&0 &~& 1&0&1&1 \\ 0&1&0&0 &~& 0&1&1&1 &~& 0&1&0&0 \\ 0&1&1&1 &~& 0&1&0&0 &~& 1&0&1&0 \\ 1&1&0&0 &~& 1&0&1&1 &~& 0&0&0&1 \end{array}\right) }[/math]
that is separated into [math]\displaystyle{ w = 3 }[/math] sub-blocks [math]\displaystyle{ H_1 }[/math], [math]\displaystyle{ H_2 }[/math], [math]\displaystyle{ H_3 }[/math].

Algorithm:

  1. We split the input [math]\displaystyle{ s }[/math] into [math]\displaystyle{ w = 3 }[/math] parts of length [math]\displaystyle{ \log_2(12/3) = 2 }[/math] and we get [math]\displaystyle{ s_1 = 01 }[/math], [math]\displaystyle{ s_2 = 00 }[/math], [math]\displaystyle{ s_3 = 11 }[/math].
  2. We convert each [math]\displaystyle{ s_i }[/math] into an integer and get [math]\displaystyle{ s_1 = 1 }[/math], [math]\displaystyle{ s_2 = 0 }[/math], [math]\displaystyle{ s_3 = 3 }[/math].
  3. From the first sub-matrix [math]\displaystyle{ H_1 }[/math] we pick column 2, from the second sub-matrix [math]\displaystyle{ H_2 }[/math] column 1, and from the third sub-matrix column 4.
  4. We add the chosen columns and obtain the result [math]\displaystyle{ r = 0111 \oplus 0001 \oplus 1001 = 1111 }[/math].
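The computation can be checked with a few lines of Python. The sub-blocks below are the columns of [math]\displaystyle{ H_1 }[/math], [math]\displaystyle{ H_2 }[/math], [math]\displaystyle{ H_3 }[/math] read off the matrix above; indices are 0-based, so column 2 in the text is index 1 here.

  # columns of the sub-matrices H_1, H_2, H_3, written as 4-bit tuples
  H1 = [(1, 0, 0, 1), (0, 1, 1, 1), (1, 0, 1, 0), (1, 0, 1, 0)]
  H2 = [(0, 0, 0, 1), (1, 1, 1, 0), (0, 1, 0, 1), (0, 1, 0, 1)]
  H3 = [(1, 0, 1, 0), (0, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1)]

  s1, s2, s3 = 1, 0, 3                     # integer values of the chunks 01, 00, 11
  digest = [a ^ b ^ c for a, b, c in zip(H1[s1], H2[s2], H3[s3])]
  print(digest)                            # [1, 1, 1, 1]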

Security proof of FSB

The Merkle–Damgård construction is proven to preserve the security of the compression function it is built on: if the compression function is secure, so is the iterated hash function. So we only need to show that the compression function [math]\displaystyle{ \phi }[/math] is secure.

A cryptographic hash function needs to be secure in three different aspects:

  1. Pre-image resistance: Given a hash h it should be hard to find a message m such that Hash(m)=h
  2. Second pre-image resistance: Given a message m1 it should be hard to find a message m2 such that Hash(m1) = Hash(m2)
  3. Collision resistance: It should be hard to find two different messages m1 and m2 such that Hash(m1)=Hash(m2)

Note that if an adversary can find a second pre-image, then it can certainly find a collision. This means that if we can prove our system to be collision resistant, it will certainly be second-pre-image resistant.

Usually in cryptography hard means something like “almost certainly beyond the reach of any adversary who must be prevented from breaking the system”. We will, however, need a more exact meaning of the word hard. We will take hard to mean “the runtime of any algorithm that finds a collision or pre-image will depend exponentially on the size of the hash value”. This means that by relatively small additions to the hash size, we can quickly reach high security.

Pre-image resistance and regular syndrome decoding (RSD)

As said before, the security of FSB depends on a problem called regular syndrome decoding (RSD). Syndrome decoding is originally a problem from coding theory but its NP-completeness makes it a nice application for cryptography. Regular syndrome decoding is a special case of syndrome decoding and is defined as follows:

Definition of RSD: given [math]\displaystyle{ w }[/math] matrices [math]\displaystyle{ H_i }[/math] of dimension [math]\displaystyle{ r \times (n/w) }[/math] and a bit string [math]\displaystyle{ S }[/math] of length [math]\displaystyle{ r }[/math] such that there exists a set of [math]\displaystyle{ w }[/math] columns, one from each [math]\displaystyle{ H_i }[/math], summing to [math]\displaystyle{ S }[/math], find such a set of columns.

This problem has been proven to be NP-complete by a reduction from 3-dimensional matching. Though it is not known whether NP-complete problems admit polynomial-time algorithms, none have been found, and finding one would be a huge discovery.

It is easy to see that finding a pre-image of a given hash [math]\displaystyle{ S }[/math] is exactly equivalent to this problem, so the problem of finding pre-images in FSB must also be NP-complete.

We still need to prove collision resistance. For this we need another NP-complete variation of RSD: 2-regular null syndrome decoding.

Collision resistance and 2-regular null syndrome decoding (2-RNSD)

Definition of 2-RNSD: given [math]\displaystyle{ w }[/math] matrices [math]\displaystyle{ H_i }[/math] of dimension [math]\displaystyle{ r \times (n/w) }[/math], find a set of [math]\displaystyle{ w' }[/math] columns, with [math]\displaystyle{ 0 \lt w' \lt 2w }[/math] and exactly two or zero columns taken from each [math]\displaystyle{ H_i }[/math], summing to zero.

2-RNSD has also been proven to be NP-complete by a reduction from 3-dimensional matching.

Just like RSD is in essence equivalent to finding a regular word [math]\displaystyle{ w }[/math] such that [math]\displaystyle{ Hw = S }[/math], 2-RNSD is equivalent to finding a 2-regular word [math]\displaystyle{ w' }[/math] such that [math]\displaystyle{ Hw'=0 }[/math]. A 2-regular word of length [math]\displaystyle{ n }[/math] is a bit string of length [math]\displaystyle{ n }[/math] such that every interval [math]\displaystyle{ [(i-1)(n/w), i(n/w)) }[/math] contains exactly two or zero entries equal to 1. Note that a 2-regular word is just a sum of two regular words.

Suppose that we have found a collision, so we have Hash(m1) = Hash(m2) with [math]\displaystyle{ m_1\neq m_2 }[/math]. Then we can find two regular words [math]\displaystyle{ w_1 }[/math] and [math]\displaystyle{ w_2 }[/math] such that [math]\displaystyle{ Hw_1=Hw_2 }[/math]. Since we work over [math]\displaystyle{ \mathbf{F}_2 }[/math], we then have [math]\displaystyle{ H(w_1+w_2)= Hw_1 + Hw_2 = 2Hw_1 = 0 }[/math]; [math]\displaystyle{ w_1 + w_2 }[/math] is a sum of two different regular words and so must be a 2-regular word whose hash is zero, so we have solved an instance of 2-RNSD. We conclude that finding collisions in FSB is at least as difficult as solving 2-RNSD and so must be NP-complete.

The latest versions of FSB use the compression function Whirlpool to further compress the hash output. Though this cannot be proven, the authors argue that this last compression does not reduce security. Note that even if one were able to find collisions in Whirlpool, one would still need to find pre-images of those colliding values under the original FSB compression function in order to obtain a collision in FSB.

Examples

When solving RSD, we are in the opposite situation from hashing. Using the same values as in the previous example, we are given [math]\displaystyle{ H }[/math] separated into [math]\displaystyle{ w=3 }[/math] sub-blocks and a string [math]\displaystyle{ r = 1111 }[/math]. We are asked to find exactly one column in each sub-block such that the chosen columns sum to [math]\displaystyle{ r }[/math]. The expected answer is thus [math]\displaystyle{ s_1 = 1 }[/math], [math]\displaystyle{ s_2 = 0 }[/math], [math]\displaystyle{ s_3 = 3 }[/math]. This is known to be hard to compute for large matrices.
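For this toy instance a brute-force search over all [math]\displaystyle{ (n/w)^w = 4^3 = 64 }[/math] column choices recovers the answer instantly; the Python sketch below is only feasible because the example is tiny, which is precisely why RSD becomes infeasible for realistic parameter sizes.

  from itertools import product

  H1 = [(1, 0, 0, 1), (0, 1, 1, 1), (1, 0, 1, 0), (1, 0, 1, 0)]
  H2 = [(0, 0, 0, 1), (1, 1, 1, 0), (0, 1, 0, 1), (0, 1, 0, 1)]
  H3 = [(1, 0, 1, 0), (0, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1)]
  target = (1, 1, 1, 1)

  # try every combination of one column per sub-block
  for s1, s2, s3 in product(range(4), repeat=3):
      if tuple(a ^ b ^ c for a, b, c in zip(H1[s1], H2[s2], H3[s3])) == target:
          print(s1, s2, s3)                # the expected solution 1 0 3 is among those printed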

In 2-RNSD we want to find in each sub-block not one column, but two or zero columns such that they sum up to 0000 (and not to [math]\displaystyle{ r }[/math]). In the example, we might use columns 2 and 3 (counting from 0) from [math]\displaystyle{ H_1 }[/math], no columns from [math]\displaystyle{ H_2 }[/math], and columns 0 and 2 from [math]\displaystyle{ H_3 }[/math]. More solutions are possible; for example, one might use no columns from [math]\displaystyle{ H_3 }[/math].
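Both solutions are easy to verify in a few lines of Python, again writing the sub-matrices as lists of columns:

  H1 = [(1, 0, 0, 1), (0, 1, 1, 1), (1, 0, 1, 0), (1, 0, 1, 0)]
  H3 = [(1, 0, 1, 0), (0, 1, 0, 0), (1, 0, 1, 0), (1, 0, 0, 1)]

  def xor_columns(columns):
      """XOR a list of equal-length bit tuples."""
      result = (0, 0, 0, 0)
      for col in columns:
          result = tuple(a ^ b for a, b in zip(result, col))
      return result

  # columns 2 and 3 of H_1, none of H_2, columns 0 and 2 of H_3 (counting from 0)
  print(xor_columns([H1[2], H1[3], H3[0], H3[2]]))   # (0, 0, 0, 0)
  # another solution: columns 2 and 3 of H_1 only
  print(xor_columns([H1[2], H1[3]]))                 # (0, 0, 0, 0)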

Linear cryptanalysis

The provable security of FSB means that finding collisions is NP-complete. But the proof is a reduction to a problem whose hardness is only a worst-case, asymptotic statement. This offers only limited security assurance, as there can still be an algorithm that easily solves the problem for a subset of the problem space. For example, there exists a linearization method that can be used to produce collisions in a matter of seconds on a desktop PC for early variants of FSB with claimed security of [math]\displaystyle{ 2^{128} }[/math]. It is shown that the hash function offers minimal pre-image or collision resistance when the message space is chosen in a specific way.

Practical security results

The following table shows the complexity of the best known attacks against FSB.

Output size (bits) | Complexity of collision search | Complexity of inversion
160 | 2^100.3 | 2^163.6
224 | 2^135.3 | 2^229.0
256 | 2^190.0 | 2^261.0
384 | 2^215.5 | 2^391.5
512 | 2^285.6 | 2^527.4

Genesis

FSB is a sped-up version of the syndrome-based hash function (SB). In the case of SB, the compression function is very similar to the encoding function of the Niederreiter variant of the McEliece cryptosystem. Instead of using the parity check matrix of a permuted Goppa code, SB uses a random matrix [math]\displaystyle{ H }[/math]. From a security point of view this can only strengthen the system.

Other properties

  • Both the block size of the hash function and the output size are completely scalable.
  • The speed can be adjusted by adjusting the number of bitwise operations used by FSB per input bit.
  • The security can be adjusted by adjusting the output size.
  • Bad instances exist and one must take care when choosing the matrix [math]\displaystyle{ H }[/math].
  • The matrix used in the compression function may grow large in certain situations. This might be a limitation when trying to use FSB on memory-constrained devices. This problem was solved in the related hash function called Improved FSB, which is still provably secure but relies on slightly stronger assumptions.

Variants

In 2007, IFSB was published.[3] In 2010, S-FSB was published, which is 30% faster than the original. [4]

In 2011, D. J. Bernstein and Tanja Lange published RFSB, which is 10x faster than the original FSB-256. [5] RFSB was shown to run very fast on the Spartan 6 FPGA, reaching throughputs of around 5 Gbit/s. [6]

References

  1. Augot, D.; Finiasz, M.; Sendrier, N. (2003), A fast provably secure cryptographic hash function, https://www.researchgate.net/publication/220336328 
  2. Saarinen, Markku-Juhani O. (2007), "Linearization Attacks Against Syndrome Based Hashes", Progress in Cryptology – INDOCRYPT 2007, Lecture Notes in Computer Science, 4859, pp. 1–9, doi:10.1007/978-3-540-77026-8_1, ISBN 978-3-540-77025-1, https://eprint.iacr.org/2007/295.pdf, retrieved 2022-11-12 
  3. Finiasz, M.; Gaborit, P.; Sendrier, N. (2007), Improved Fast Syndrome Based Cryptographic Hash Functions, ECRYPT Hash Workshop 2007, http://events.iaik.tugraz.at/HashWorkshop07/papers/Finiasz_ImprovedFastSyndromeBasedCryptographicHashFunction.pdf, retrieved 2010-01-04
  4. Meziani, Mohammed; Dagdelen, Özgür; Cayrel, Pierre-Louis; El Yousfi Alaoui, Sidi Mohamed (2011). "S-FSB: An Improved Variant of the FSB Hash Family". Information Security and Assurance. Communications in Computer and Information Science. 200. pp. 132–145. doi:10.1007/978-3-642-23141-4_13. ISBN 978-3-642-23140-7. https://www.informatik.tu-darmstadt.de/fileadmin/user_upload/Group_CASED/Publikationen/2010/S-FSB_An_Improved_Variant_of_the_FSB_Hash_Family.pdf. Retrieved 2014-12-10. 
  5. Bernstein, Daniel J.; Lange, Tanja; Peters, Christiane; Schwabe, Peter (2011), "Really Fast Syndrome-Based Hashing", Progress in Cryptology – AFRICACRYPT 2011, Lecture Notes in Computer Science, 6737, pp. 134–152, doi:10.1007/978-3-642-21969-6_9, ISBN 978-3-642-21968-9, http://cr.yp.to/codes/rfsb-20110214.pdf, retrieved 2022-11-12 
  6. von Maurich, Ingo; Güneysu, Tim (2012), "Embedded Syndrome-Based Hashing", Progress in Cryptology - INDOCRYPT 2012, Lecture Notes in Computer Science, 7668, pp. 339–357, doi:10.1007/978-3-642-34931-7_20, ISBN 978-3-642-34930-0, https://www.ei.rub.de/media/sh/veroeffentlichungen/2012/12/10/embedded_syndrome-based_hashing.pdf, retrieved 2014-12-10 
