FM-index
In computer science, an FM-index is a compressed full-text substring index based on the Burrows–Wheeler transform, with some similarities to the suffix array. It was created by Paolo Ferragina and Giovanni Manzini,[1] who describe it as an opportunistic data structure as it allows compression of the input text while still permitting fast substring queries. The name stands for Full-text index in Minute space.[2] It can be used to efficiently find the number of occurrences of a pattern within the compressed text, as well as locate the position of each occurrence. The query time, as well as the required storage space, has a sublinear complexity with respect to the size of the input data.
The original authors have devised improvements to their original approach and dubbed it "FM-Index version 2".[3] A further improvement, the alphabet-friendly FM-index, combines the use of compression boosting and wavelet trees[4] to significantly reduce the space usage for large alphabets.
The FM-index has found use in, among other places, bioinformatics.[5]
Background
Using an index is a common strategy to efficiently search a large body of text. When the text is larger than what reasonably fits within a computer's main memory, there is a need to compress not only the text but also the index. When the FM-index was introduced, there were several suggested solutions that were based on traditional compression methods and tried to solve the compressed matching problem. In contrast, the FM-index is a compressed self-index, which means that it compresses the data and indexes it at the same time.
FM-index data structure
An FM-index is created by first taking the Burrows–Wheeler transform (BWT) of the input text. For example, the BWT of the string T = "abracadabra$" is "ard$rcaaaabb", and here it is represented by the matrix M where each row is a rotation of the text, and the rows have been sorted lexicographically. The transform corresponds to the concatenation of the characters from the last column (labeled L).
| I | F |            | L |
|---|---|------------|---|
| 1 | $ | abracadabr | a |
| 2 | a | $abracadab | r |
| 3 | a | bra$abraca | d |
| 4 | a | bracadabra | $ |
| 5 | a | cadabra$ab | r |
| 6 | a | dabra$abra | c |
| 7 | b | ra$abracad | a |
| 8 | b | racadabra$ | a |
| 9 | c | adabra$abr | a |
| 10 | d | abra$abrac | a |
| 11 | r | a$abracada | b |
| 12 | r | acadabra$a | b |
The BWT in itself allows for some compression with, for instance, move to front and Huffman encoding, but the transform has even more uses. The rows in the matrix are essentially the sorted suffixes of the text and the first column F of the matrix shares similarities with suffix arrays. How the suffix array relates to the BWT lies at the heart of the FM-index.
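The following is a minimal Python sketch of how the matrix M and the column L above can be obtained by sorting all rotations of the text. This is for illustration only; practical implementations derive the BWT from a suffix array instead of materialising every rotation.

```python
def bwt_via_rotations(text):
    """Build the Burrows-Wheeler transform by sorting all rotations of `text`.

    Mirrors the matrix M shown above: the first characters of the sorted
    rotations form column F, the last characters form column L (the BWT).
    """
    n = len(text)
    rotations = sorted(text[i:] + text[:i] for i in range(n))
    first_column = "".join(row[0] for row in rotations)   # column F
    last_column = "".join(row[-1] for row in rotations)   # column L
    return first_column, last_column

F, L = bwt_via_rotations("abracadabra$")
print(F)  # $aaaaabbcdrr
print(L)  # ard$rcaaaabb
```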
It is possible to make a last-to-first column mapping LF(i) from an index i to an index j, such that F[j] = L[i], with the help of a table C[c] and a function Occ(c, k). C[c] gives, for each character c of the alphabet, the number of characters in T that are lexicographically smaller than c, and Occ(c, k) gives the number of occurrences of c in the prefix L[1..k]. Ferragina and Manzini showed[1] that Occ(c, k) can be computed in constant time. For the example text, the table C is:

| c    | $ | a | b | c | d | r  |
|------|---|---|---|---|---|----|
| C[c] | 0 | 1 | 6 | 8 | 9 | 10 |

The last-to-first mapping can now be defined as LF(i) = C[L[i]] + Occ(L[i], i). For instance, on row 9, L is a and the same a can be found on row 5 in the first column F, so LF(9) should be 5, and indeed LF(9) = C[a] + Occ(a, 9) = 1 + 4 = 5. For any row i of the matrix, the character in the last column L[i] precedes the character in the first column F[i] also in T. Finally, if L[i] = T[k], then L[LF(i)] = T[k - 1], and using this equality it is possible to extract all of T from L, working backwards from the end of the string. The FM-index itself is a compression of the string L together with C and Occ in some form, as well as information that maps a selection of indices in L to positions in the original string T.
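To make the mapping concrete, here is a minimal Python sketch of C, Occ, LF and the backwards extraction of T from L. The function names (build_c_table, occ, lf, reconstruct) are illustrative only, and Occ is answered by a naive scan of L; the actual index answers it in constant time from compressed structures.

```python
def build_c_table(L):
    """C[c] = number of characters in the text lexicographically smaller than c."""
    counts = {}
    for ch in L:
        counts[ch] = counts.get(ch, 0) + 1
    C, total = {}, 0
    for ch in sorted(counts):
        C[ch] = total
        total += counts[ch]
    return C

def occ(L, c, k):
    """Occ(c, k): occurrences of c in the prefix L[1..k] (1-based, naive scan)."""
    return L[:k].count(c)

def lf(L, C, i):
    """Last-to-first mapping LF(i) = C[L[i]] + Occ(L[i], i), 1-based as in the text."""
    c = L[i - 1]
    return C[c] + occ(L, c, i)

def reconstruct(L):
    """Walk LF from the row whose last character is '$' to rebuild T backwards."""
    C = build_c_table(L)
    i = L.index("$") + 1          # row of M that equals T itself
    out = []
    for _ in range(len(L)):
        out.append(L[i - 1])
        i = lf(L, C, i)
    return "".join(reversed(out))

L = "ard$rcaaaabb"
C = build_c_table(L)
print(lf(L, C, 9))     # 5, matching the LF(9) example above
print(reconstruct(L))  # abracadabra$
```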
Count
The operation count takes a pattern P[1..p] and returns the number of occurrences of that pattern in the original text T. Since the rows of matrix M are sorted and contain every suffix of T, the occurrences of pattern P appear next to each other in a single contiguous range of rows. The operation iterates backwards over the pattern: for every character, the range of rows whose suffixes begin with the part of the pattern processed so far is found. For example, the count of the pattern "bra" in "abracadabra" follows these steps:
- The first character we look for is a, the last character in the pattern. The initial range is set to [C[a] + 1 .. C[a+1]] = [2..6], where a+1 denotes the character that follows a in the alphabet (here b). This range over L corresponds to every suffix of T that begins with a.
- The next character to look for is r. The new range is [C[r] + Occ(r, start-1) + 1 .. C[r] + Occ(r, end)] = [10 + 0 + 1 .. 10 + 2] = [11..12], where start and end are the beginning and end of the previous range. This range over L corresponds to every suffix of T that begins with ra.
- The last character to look for is b. The new range is [C[b] + Occ(b, start-1) + 1 .. C[b] + Occ(b, end)] = [6 + 0 + 1 .. 6 + 2] = [7..8]. This range over L corresponds to every suffix of T that begins with bra. Now that the whole pattern has been processed, the count equals the size of the range: 8 - 7 + 1 = 2.
If the range becomes empty or the range boundaries cross each other before the whole pattern has been looked up, the pattern does not occur in T. Because Occ(c, k) can be computed in constant time, count completes in time linear in the length of the pattern: O(p) time. A short sketch of this backward search follows.
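The sketch below implements the backward search in Python, again using a naive linear-scan Occ in place of the index's constant-time rank structure; the function and variable names are illustrative.

```python
def count_range(L, C, pattern):
    """Backward search: return the row range (start, end) of M matching `pattern`,
    or None if the pattern does not occur in the text."""
    def occ(c, k):
        return L[:k].count(c) if k > 0 else 0

    start, end = 1, len(L)                 # full range, 1-based and inclusive
    for c in reversed(pattern):
        if c not in C:                     # character absent from the text
            return None
        start = C[c] + occ(c, start - 1) + 1
        end = C[c] + occ(c, end)
        if start > end:                    # empty range: no match
            return None
    return start, end

L = "ard$rcaaaabb"
C = {"$": 0, "a": 1, "b": 6, "c": 8, "d": 9, "r": 10}
rng = count_range(L, C, "bra")
print(rng, rng[1] - rng[0] + 1)            # (7, 8) 2
```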
Locate
The operation locate takes as input the index of a character in L and returns its position i in T. For instance, locate(7) = 8. To locate every occurrence of a pattern, first the range of rows whose suffixes begin with the pattern is found, in the same way the count operation finds it. Then the position of every character in the range can be located.
To map an index in L to one in T, a subset of the indices in L are associated with a position in T. If L[j] has a position associated with it, locate(j) is trivial. If it does not, the LF mapping is applied repeatedly until an index with an associated position is reached. By associating a suitable number of indices, an upper bound on the number of steps per occurrence is obtained. Locate can be implemented to find occ occurrences of a pattern P[1..p] in a text T[1..u] in O(p + occ log^ε u) time using O(H_k(T) + (log log u) / (log^ε u)) bits per input symbol for any k ≥ 0, where H_k(T) is the k-th order empirical entropy of T.[1]
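Below is a sketch of one such sampling scheme, assuming (purely for illustration) that a position in T is stored for every fourth text position and for position 1, so the LF walk always terminates; the helper functions mirror the earlier sketches and all names are hypothetical.

```python
def occ(L, c, k):
    """Occ(c, k): occurrences of c in L[1..k] (naive scan)."""
    return L[:k].count(c)

def lf(L, C, i):
    """Last-to-first mapping, 1-based as in the text."""
    c = L[i - 1]
    return C[c] + occ(L, c, i)

def build_samples(L, C, sample_rate=4):
    """Associate a text position with a subset of rows: every sample_rate-th
    position, plus position 1 so the LF walk never wraps past the start of T."""
    samples = {}
    i = L.index("$") + 1          # row whose L-character is T[n]
    pos = len(L)
    for _ in range(len(L)):
        if pos % sample_rate == 0 or pos == 1:
            samples[i] = pos
        i = lf(L, C, i)
        pos -= 1
    return samples

def locate(L, C, samples, i):
    """Position in T of the character L[i]: step with LF until a sampled row
    is reached, then add back the number of steps taken."""
    steps = 0
    while i not in samples:
        i = lf(L, C, i)
        steps += 1
    return samples[i] + steps

L = "ard$rcaaaabb"
C = {"$": 0, "a": 1, "b": 6, "c": 8, "d": 9, "r": 10}
samples = build_samples(L, C)
print(locate(L, C, samples, 7))   # 8, matching locate(7) = 8 above
```

Each LF step moves one position backwards in T, so sampling every sample_rate-th position bounds the work per occurrence by sample_rate steps; choosing the sampling rate trades locate time against index space.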
Applications
DNA read mapping
The FM-index with backtracking has been applied very successfully to approximate string matching and sequence alignment; see, for example, the Bowtie aligner (cited more than 2,000 times): http://bowtie-bio.sourceforge.net/index.shtml
References
1. Paolo Ferragina and Giovanni Manzini (2000). "Opportunistic Data Structures with Applications". Proceedings of the 41st Annual Symposium on Foundations of Computer Science. p. 390.
2. Paolo Ferragina and Giovanni Manzini (2005). "Indexing Compressed Text". Journal of the ACM 52 (4), p. 553.
3. Paolo Ferragina and Rossano Venturini (September 2005). "FM-index version 2". Dipartimento di Informatica, University of Pisa, Italy. http://pages.di.unipi.it/ferragina/Libraries/fmindexV2/index.html
4. P. Ferragina, G. Manzini, V. Mäkinen and G. Navarro (2004). "An Alphabet-Friendly FM-index". Proceedings of SPIRE 2004, pp. 150–160. LNCS 3246.
5. Jared T. Simpson and Richard Durbin (2010). "Efficient construction of an assembly string graph using the FM-index". Bioinformatics 26 (12): i367–i373. doi:10.1093/bioinformatics/btq217.
Original source: https://en.wikipedia.org/wiki/FM-index.