Summed-area table

Figure: Using a summed-area table (2.) of an order-6 magic square (1.) to sum up a subrectangle of its values; each coloured spot highlights the sum inside the rectangle of that colour.

A summed-area table is a data structure and algorithm for quickly and efficiently generating the sum of values in a rectangular subset of a grid. In the image processing domain, it is also known as an integral image. It was introduced to computer graphics in 1984 by Frank Crow for use with mipmaps. In computer vision it was popularized by Lewis[1] and then given the name "integral image" and prominently used within the Viola–Jones object detection framework in 2001. Historically, this principle is very well known in the study of multi-dimensional probability distribution functions, namely in computing 2D (or ND) probabilities (area under the probability distribution) from the respective cumulative distribution functions.[2]

The algorithm

As the name suggests, the value at any point (x, y) in the summed-area table is the sum of all the pixels above and to the left of (x, y), inclusive:[3][4] [math]\displaystyle{ I(x,y) = \sum_{\begin{smallmatrix} x' \le x \\ y' \le y\end{smallmatrix}} i(x',y') }[/math] where [math]\displaystyle{ i(x,y) }[/math] is the value of the pixel at (x, y).

The summed-area table can be computed efficiently in a single pass over the image, as the value in the summed-area table at (x, y) is just:[5] [math]\displaystyle{ I(x,y) = i(x,y) + I(x,y-1) + I(x-1,y) - I(x-1,y-1) }[/math] (Note that the table is computed starting from the top-left corner; terms that fall outside the image, such as [math]\displaystyle{ I(x,-1) }[/math], are taken to be zero.)
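For illustration, a minimal Python/NumPy sketch of this single pass is given below; the function name summed_area_table and the [row, column] array layout are choices made for this example rather than part of the original formulation.

import numpy as np

def summed_area_table(image):
    # Build the table with the recurrence
    # I(x, y) = i(x, y) + I(x, y-1) + I(x-1, y) - I(x-1, y-1),
    # scanning the image once from the top-left corner.
    h, w = image.shape
    sat = np.zeros((h, w), dtype=np.int64)  # wide accumulator
    for y in range(h):
        for x in range(w):
            sat[y, x] = (image[y, x]
                         + (sat[y - 1, x] if y > 0 else 0)
                         + (sat[y, x - 1] if x > 0 else 0)
                         - (sat[y - 1, x - 1] if y > 0 and x > 0 else 0))
    return sat

# Equivalently, in vectorized form:
# sat = np.cumsum(np.cumsum(image, axis=0), axis=1)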

Figure: A description of computing a sum in the summed-area table data structure/algorithm.

Once the summed-area table has been computed, evaluating the sum of intensities over any rectangular area requires exactly four array references regardless of the area size. That is, using the notation in the figure at right, with A = (x0, y0), B = (x1, y0), C = (x0, y1) and D = (x1, y1), the sum of i(x, y) over the rectangle spanned by A, B, C, and D is: [math]\displaystyle{ \sum_{\begin{smallmatrix} x_0 \lt x \le x_1 \\ y_0 \lt y \le y_1 \end{smallmatrix}} i(x,y) = I(D) + I(A) - I(B) - I(C) }[/math]
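A short sketch of this four-lookup evaluation, continuing the hypothetical summed_area_table example above. The indices follow the exclusive-lower-corner convention of the formula, so the rectangle must not touch the top or left border unless the table is padded with an extra row and column of zeros.

def rect_sum(sat, x0, y0, x1, y1):
    # Sum of i(x, y) over x0 < x <= x1 and y0 < y <= y1,
    # computed as I(D) + I(A) - I(B) - I(C) with A=(x0,y0), B=(x1,y0),
    # C=(x0,y1), D=(x1,y1); sat is indexed as sat[y, x].
    A = sat[y0, x0]
    B = sat[y0, x1]
    C = sat[y1, x0]
    D = sat[y1, x1]
    return D + A - B - C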

Extensions

This method is naturally extended to continuous domains.[2]

The method can also be extended to high-dimensional images.[6] If the corners of the rectangle are [math]\displaystyle{ x^p }[/math] with [math]\displaystyle{ p }[/math] in [math]\displaystyle{ \{0,1\}^d }[/math], then the sum of image values contained in the rectangle is computed with the formula [math]\displaystyle{ \sum_{p\in\{0,1\}^d }(-1)^{d-\|p\|_1} I(x^p) }[/math] where [math]\displaystyle{ I(x) }[/math] is the integral image at [math]\displaystyle{ x }[/math] and [math]\displaystyle{ d }[/math] the image dimension. The notation [math]\displaystyle{ x^p }[/math] corresponds, in the two-dimensional example above, to [math]\displaystyle{ d=2 }[/math], [math]\displaystyle{ A=x^{(0,0)} }[/math], [math]\displaystyle{ B=x^{(1,0)} }[/math], [math]\displaystyle{ C=x^{(0,1)} }[/math] and [math]\displaystyle{ D=x^{(1,1)} }[/math]. In neuroimaging, for example, the images have dimension [math]\displaystyle{ d=3 }[/math] or [math]\displaystyle{ d=4 }[/math], when using voxels or voxels with a time-stamp.
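As an illustration of the d-dimensional formula, the following sketch (hypothetical helpers built on NumPy cumulative sums, not part of the cited work) enumerates the 2^d corners and applies the alternating signs; it assumes the box does not touch the lower image border, which can be ensured by zero-padding the integral image.

import itertools
import numpy as np

def integral_image(image):
    # Cumulative sums along every axis give the d-dimensional integral image.
    sat = image.astype(np.int64)
    for axis in range(image.ndim):
        sat = np.cumsum(sat, axis=axis)
    return sat

def box_sum(sat, lo, hi):
    # Sum over the box with exclusive lower corner lo and inclusive upper
    # corner hi, via sum over p in {0,1}^d of (-1)^(d - ||p||_1) * I(x^p).
    d = sat.ndim
    total = 0
    for p in itertools.product((0, 1), repeat=d):
        corner = tuple(hi[k] if p[k] else lo[k] for k in range(d))
        total += (-1) ** (d - sum(p)) * sat[corner]
    return total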

This method has been extended to high-order integral images, as in the work of Phan et al.,[7] who provided two, three, or four integral images for quickly and efficiently calculating the standard deviation (variance), skewness, and kurtosis of a local block in the image. This is detailed below:

To compute the variance or standard deviation of a block, we need two integral images: [math]\displaystyle{ I(x,y) = \sum_{\begin{smallmatrix} x' \le x \\ y' \le y\end{smallmatrix}} i(x',y') }[/math] [math]\displaystyle{ I^2(x,y) = \sum_{\begin{smallmatrix} x' \le x \\ y' \le y\end{smallmatrix}} i^2(x',y') }[/math] The variance is given by: [math]\displaystyle{ \operatorname{Var}(X) = \frac{1}{n} \sum_{i=1}^n (x_i - \mu)^2. }[/math] Let [math]\displaystyle{ S_1 }[/math] and [math]\displaystyle{ S_2 }[/math] denote the sums over block [math]\displaystyle{ ABCD }[/math] of [math]\displaystyle{ I }[/math] and [math]\displaystyle{ I^2 }[/math], respectively; both are computed quickly from the corresponding integral images. Now, we manipulate the variance equation as: [math]\displaystyle{ \begin{align} \operatorname{Var}(X) &= \frac{1}{n} \sum_{i=1}^n \left(x_i^2 - 2 \mu x_i + \mu^2\right) \\[1ex] &= \frac{1}{n} \left[\sum_{i=1}^n x_i^2 - 2 \sum_{i=1}^n \mu x_i + \sum_{i=1}^n \mu^2\right] \\[1ex] &= \frac{1}{n} \left[\sum_{i=1}^n x_i^2 - 2\sum_{i=1}^n \mu x_i + n \mu^2\right] \\[1ex] &= \frac{1}{n} \left[\sum_{i=1}^n x_i^2 - 2 \mu \sum_{i=1}^n x_i + n \mu^2\right] \\[1ex] &= \frac{1}{n} \left[S_2 - 2 \frac{S_1}{n} S_1 + n \left(\frac{S_1}{n}\right)^2\right] \\[1ex] &= \frac{1}{n} \left[S_2 - \frac{S_1^2}{n}\right] \end{align} }[/math] where [math]\displaystyle{ \mu=S_1/n }[/math] and [math]\displaystyle{ S_2 = \sum_{i=1}^n x_i^2 }[/math].
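Putting this together, a minimal sketch of block variance from the two integral images follows; the helper name block_variance and the exclusive-lower-corner indexing are assumptions of this example.

import numpy as np

def block_variance(image, x0, y0, x1, y1):
    # Variance of the block x0 < x <= x1, y0 < y <= y1 using
    # Var = (S2 - S1**2 / n) / n, where S1 and S2 are the block sums
    # taken from the integral images of i and i**2.
    img = image.astype(np.float64)
    I1 = np.cumsum(np.cumsum(img, axis=0), axis=1)       # I
    I2 = np.cumsum(np.cumsum(img ** 2, axis=0), axis=1)  # I^2

    def block_sum(sat):
        return sat[y1, x1] + sat[y0, x0] - sat[y0, x1] - sat[y1, x0]

    n = (x1 - x0) * (y1 - y0)
    S1, S2 = block_sum(I1), block_sum(I2)
    return (S2 - S1 ** 2 / n) / n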

Similar to the estimation of the mean ([math]\displaystyle{ \mu }[/math]) and variance ([math]\displaystyle{ \operatorname{Var} }[/math]), which require the integral images of the first and second powers of the image respectively (i.e. [math]\displaystyle{ I, I^2 }[/math]), analogous manipulations can be made on the third and fourth powers of the image (i.e. [math]\displaystyle{ I^3(x,y), I^4(x,y) }[/math]) to obtain the skewness and kurtosis.[7] One important implementation detail that must be kept in mind for the above methods, as mentioned by F. Shafait et al.,[8] is that integer overflow can occur in the higher-order integral images if 32-bit integers are used.
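To make the overflow concern concrete: for an 8-bit image a single fourth-power term can already reach 255^4 ≈ 4.2×10^9, which exceeds the range of a signed 32-bit integer, so the higher-order tables should be accumulated in a wider type. A minimal sketch follows; the function name and the choice of float64 are assumptions of this example.

import numpy as np

def power_integral_images(image, max_power=4):
    # Integral images of i, i^2, i^3 and i^4, as needed for the mean,
    # variance, skewness and kurtosis; accumulate in float64 (or int64)
    # because 32-bit integers overflow for the higher powers.
    img = image.astype(np.float64)
    return [np.cumsum(np.cumsum(img ** k, axis=0), axis=1)
            for k in range(1, max_power + 1)]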

References

  1. Lewis, J.P. (1995). "Fast template matching". pp. 120–123. 
  2. Finkelstein, Amir; neeratsharma (2010). "Double Integrals By Summing Values Of Cumulative Distribution Function". http://demonstrations.wolfram.com/DoubleIntegralsBySummingValuesOfACumulativeDistributionFunct/. 
  3. Crow, Franklin (1984). "Summed-area tables for texture mapping". pp. 207–212. doi:10.1145/800031.808600. https://dl.acm.org/doi/pdf/10.1145/800031.808600. 
  4. Viola, Paul; Jones, Michael (2002). "Robust Real-time Object Detection". http://www.hpl.hp.com/techreports/Compaq-DEC/CRL-2001-1.pdf. 
  5. BADGERATI (2010-09-03). "Computer Vision – The Integral Image". https://computersciencesource.wordpress.com/2010/09/03/computer-vision-the-integral-image/. 
  6. Tapia, Ernesto (January 2011). "A note on the computation of high-dimensional integral images". Pattern Recognition Letters 32 (2): 197–201. doi:10.1016/j.patrec.2010.10.007. Bibcode: 2011PaReL..32..197T. 
  7. Phan, Thien; Sohoni, Sohum; Larson, Eric C.; Chandler, Damon M. (22 April 2012). "Performance-analysis-based acceleration of image quality assessment". 2012 IEEE Southwest Symposium on Image Analysis and Interpretation. pp. 81–84. doi:10.1109/SSIAI.2012.6202458. ISBN 978-1-4673-1830-3. http://vision.okstate.edu/pubs/ssiai_tp_1.pdf. 
  8. Shafait, Faisal; Keysers, Daniel; Breuel, Thomas M. (January 2008). "Efficient implementation of local adaptive thresholding techniques using integral images". Electronic Imaging. Document Recognition and Retrieval XV 6815: 681510–681510–6. doi:10.1117/12.767755. Bibcode: 2008SPIE.6815E..10S. http://www.csse.uwa.edu.au/~shafait/papers/Shafait-efficient-binarization-SPIE08.pdf. 
