Harris affine region detector


In the fields of computer vision and image analysis, the Harris affine region detector belongs to the category of feature detection. Feature detection is a preprocessing step of several algorithms that rely on identifying characteristic points or interest points so as to make correspondences between images, recognize textures, categorize objects or build panoramas.

Overview

The Harris affine detector can identify similar regions between images that are related through affine transformations and have different illuminations. These affine-invariant detectors should be capable of identifying similar regions in images taken from different viewpoints that are related by a simple geometric transformation: scaling, rotation and shearing. These detected regions have been called both invariant and covariant: the detection is invariant to the image transformation, while the regions themselves change covariantly with the transformation.[1] Do not dwell too much on these two naming conventions; the important thing to understand is that the design of these interest points makes them compatible across images taken from several viewpoints. Other detectors that are affine-invariant include the Hessian affine region detector, Maximally stable extremal regions, the Kadir–Brady saliency detector, edge-based regions (EBR) and intensity-extrema-based regions (IBR).

Mikolajczyk and Schmid (2002) first described the Harris affine detector as it is used today in An Affine Invariant Interest Point Detector.[2] Earlier works in this direction include the use of affine shape adaptation by Lindeberg and Garding for computing affine-invariant image descriptors and thereby reducing the influence of perspective image deformations,[3] the use of affine-adapted feature points for wide baseline matching by Baumberg,[4] and the first use of scale-invariant feature points by Lindeberg;[5][6][7] see these references for an overview of the theoretical background. The Harris affine detector relies on the combination of corner points detected through Harris corner detection, multi-scale analysis through Gaussian scale space, and affine normalization using an iterative affine shape adaptation algorithm. The algorithm follows an iterative approach to detecting these regions (a skeletal code sketch follows the list):

  1. Identify initial region points using scale-invariant Harris–Laplace detector.
  2. For each initial point, normalize the region to be affine invariant using affine shape adaptation.
  3. Iteratively estimate the affine region: select the proper integration scale and differentiation scale, and spatially localize the interest point.
  4. Update the affine region using these scales and spatial localizations.
  5. Repeat step 3 if the stopping criterion is not met.
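
To make the overall flow concrete, here is a minimal Python sketch of this outer loop. The helpers harris_laplace and estimate_affine_region are hypothetical placeholders for the stages detailed in the sections below, not functions from any particular library.

```python
import numpy as np

def harris_laplace(image):
    """Step 1 (placeholder): yield (x, sigma_I, sigma_D) initial points."""
    raise NotImplementedError  # see the Harris-Laplace section below

def estimate_affine_region(image, x, sigma_i, sigma_d, U):
    """Steps 3-4 (placeholder): refine the scales, the position and the
    shape matrix U; also return the isotropy measure q in [0, 1]."""
    raise NotImplementedError  # see the iterative algorithm below

def harris_affine(image, max_iters=40, eps_c=0.05):
    """Outer loop mirroring the five steps listed above."""
    regions = []
    for x, sigma_i, sigma_d in harris_laplace(image):        # step 1
        U = np.eye(2)                                        # step 2: start isotropic
        for _ in range(max_iters):                           # steps 3-4
            x, sigma_i, sigma_d, U, q = estimate_affine_region(
                image, x, sigma_i, sigma_d, U)
            if 1.0 - q < eps_c:          # step 5: stop when nearly isotropic
                regions.append((x, sigma_i, U))
                break
    return regions
```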

Algorithm description

Harris–Laplace detector (initial region points)

The Harris affine detector relies heavily on both the Harris measure and a Gaussian scale space representation. Therefore, a brief examination of both follows. For more exhaustive derivations, see corner detection and Gaussian scale space or their associated papers.[6][8]

Harris corner measure

The Harris corner detector algorithm relies on a central principle: at a corner, the image intensity changes significantly in multiple directions. This can alternatively be formulated by examining the changes of intensity due to shifts in a local window. Around a corner point, the image intensity will change greatly when the window is shifted in an arbitrary direction. Following this intuition and through a clever decomposition, the Harris detector uses the second moment matrix as the basis of its corner decisions. (See corner detection for a more complete derivation.) The matrix [math]\displaystyle{ A }[/math] has also been called the autocorrelation matrix and has values closely related to the derivatives of image intensity.

[math]\displaystyle{ A(\mathbf{x}) = \sum_{p,q} w(p,q) \begin{bmatrix} I_x^2(p,q) & I_x I_y(p,q) \\ I_x I_y(p,q) & I_y^2(p,q)\\ \end{bmatrix} }[/math]

where [math]\displaystyle{ I_{x} }[/math] and [math]\displaystyle{ I_{y} }[/math] are the respective derivatives (of pixel intensity) in the [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] direction at point ([math]\displaystyle{ p }[/math],[math]\displaystyle{ q }[/math]); [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] are the position parameters of the weighting function w. The off-diagonal entries are the product of [math]\displaystyle{ I_{x} }[/math] and [math]\displaystyle{ I_{y} }[/math], while the diagonal entries are squares of the respective derivatives. The weighting function [math]\displaystyle{ w(x,y) }[/math] can be uniform, but is more typically an isotropic, circular Gaussian,

[math]\displaystyle{ w(x,y) = g(x,y,\sigma) = \frac{1}{2\pi \sigma ^2} e^{ \left (-\frac{ x^2 + y^2}{2\sigma ^2} \right )} }[/math]

that acts to average in a local region while weighting those values near the center more heavily.

As it turns out, this [math]\displaystyle{ A }[/math] matrix describes the shape of the autocorrelation measure due to shifts in window location. Thus, if we let [math]\displaystyle{ \lambda_1 }[/math] and [math]\displaystyle{ \lambda_2 }[/math] be the eigenvalues of [math]\displaystyle{ A }[/math], then these values provide a quantitative description of how the autocorrelation measure changes in space: its principal curvatures. As Harris and Stephens (1988) point out, the [math]\displaystyle{ A }[/math] matrix centered on corner points will have two large, positive eigenvalues.[8] Rather than extracting these eigenvalues using methods like singular value decomposition, the Harris measure based on the trace and determinant is used:

[math]\displaystyle{ R = \det(A) - \alpha \operatorname{trace}^2(A) = \lambda_1 \lambda_2 - \alpha (\lambda_1 + \lambda_2)^2 }[/math]

where [math]\displaystyle{ \alpha }[/math] is a constant. Corner points have large, positive eigenvalues and would thus have a large Harris measure. Thus, corner points are identified as local maxima of the Harris measure that are above a specified threshold.

[math]\displaystyle{ \{x_c\} = \big\{ x_c \mid R(x_c) \gt R(x_i)\ \forall x_i \in W(x_c),\ R(x_c) \gt t_\text{threshold} \big\} }[/math]

where [math]\displaystyle{ \{x_c\} }[/math] is the set of all corner points, [math]\displaystyle{ R(x) }[/math] is the Harris measure calculated at [math]\displaystyle{ x }[/math], [math]\displaystyle{ W(x_c) }[/math] is an 8-neighbor set centered on [math]\displaystyle{ x_c }[/math] and [math]\displaystyle{ t_\text{threshold} }[/math] is a specified threshold.
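
As a concrete illustration, the following Python sketch (assuming NumPy and SciPy are available) computes the Harris measure with Gaussian-derivative filters and extracts thresholded local maxima over the 8-neighborhood. The parameter values are illustrative defaults, not canonical choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_response(image, sigma=1.0, alpha=0.04):
    """Harris measure R = det(A) - alpha * trace(A)^2 at every pixel."""
    # Image derivatives via Gaussian-derivative filters (axis 1 is x, axis 0 is y).
    Ix = gaussian_filter(image, sigma, order=(0, 1))
    Iy = gaussian_filter(image, sigma, order=(1, 0))
    # Entries of the second moment matrix A, averaged by the Gaussian window w.
    Axx = gaussian_filter(Ix * Ix, sigma)
    Axy = gaussian_filter(Ix * Iy, sigma)
    Ayy = gaussian_filter(Iy * Iy, sigma)
    det = Axx * Ayy - Axy ** 2
    trace = Axx + Ayy
    return det - alpha * trace ** 2

def harris_corners(image, threshold, sigma=1.0, alpha=0.04):
    """Corner points: local maxima of R over W(x_c) that exceed the threshold."""
    R = harris_response(image, sigma, alpha)
    is_max = R == maximum_filter(R, size=3)   # 3x3 window = the 8 neighbors + self
    return np.argwhere(is_max & (R > threshold))
```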

Figure: the 8-point neighborhood.

Gaussian scale-space

A Gaussian scale space representation of an image is the set of images that result from convolving a Gaussian kernel of various sizes with the original image. In general, the representation can be formulated as:

[math]\displaystyle{ L(\mathbf{x},s) = G(s) \otimes I(\mathbf{x}) }[/math]

where [math]\displaystyle{ G(s) }[/math] is an isotropic, circular Gaussian kernel as defined above. The convolution with a Gaussian kernel smooths the image using a window the size of the kernel. A larger scale, [math]\displaystyle{ s }[/math], corresponds to a smoother resultant image. Mikolajczyk and Schmid (2001) point out that derivatives and other measurements must be normalized across scales.[9] A derivative of order [math]\displaystyle{ m }[/math], [math]\displaystyle{ D_{i_1, \dots, i_m} }[/math], must be normalized by a factor [math]\displaystyle{ s^m }[/math] in the following manner:

[math]\displaystyle{ D_{i_1, \dots, i_m}(\mathbf{x},s) = s^m L_{i_1, \dots, i_m}(\mathbf{x},s) }[/math]

These derivatives, or any arbitrary measure, can be adapted to a scale space representation by calculating the measure over a set of scales, where the [math]\displaystyle{ n }[/math]th scale is [math]\displaystyle{ s_n = k^n s_0 }[/math]. See scale space for a more complete description.
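
As a short sketch of how such scale-normalized measures can be computed, the following assumes SciPy's gaussian_filter, whose order argument yields Gaussian-derivative responses:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(image, s0=1.0, k=1.4, n=8):
    """L(x, s_n) for s_n = k**n * s0: the image smoothed at each scale."""
    scales = [s0 * k ** i for i in range(n)]
    return scales, [gaussian_filter(image, s) for s in scales]

def normalized_derivative(image, s, order):
    """Scale-normalized derivative s**m * L_{i_1..i_m}(x, s), m = total order."""
    m = sum(order)
    return (s ** m) * gaussian_filter(image, s, order=order)

def normalized_laplacian(image, s):
    """The scale-normalized Laplacian s^2 (L_xx + L_yy), used later for
    characteristic-scale selection."""
    return (normalized_derivative(image, s, (2, 0))
            + normalized_derivative(image, s, (0, 2)))
```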

Combining Harris detector across Gaussian scale-space

The Harris-Laplace detector combines the traditional 2D Harris corner detector with the idea of a Gaussian scale space representation in order to create a scale-invariant detector. Harris corner points are good starting points because they have been shown to have good rotational and illumination invariance in addition to identifying the interesting points of the image.[10] However, the points are not scale invariant and thus the second-moment matrix must be modified to reflect a scale-invariant property. Let us denote by [math]\displaystyle{ M = \mu(\mathbf{x}, \sigma_{\mathit{I}}, \sigma_{\mathit{D}}) }[/math] the scale-adapted second-moment matrix used in the Harris-Laplace detector.

[math]\displaystyle{ M = \mu(\mathbf{x}, \sigma_{\mathit{I}}, \sigma_{\mathit{D}}) = \sigma_D^2 g(\sigma_I) \otimes \begin{bmatrix} L_{x}^2(\mathbf{x}, \sigma_{D}) & L_{x}L_{y}(\mathbf{x}, \sigma_{D}) \\ L_{x}L_{y}(\mathbf{x}, \sigma_{D}) & L_{y}^2(\mathbf{x}, \sigma_{D}) \end{bmatrix} }[/math][11]

where [math]\displaystyle{ g(\sigma_I) }[/math] is the Gaussian kernel of scale [math]\displaystyle{ \sigma_I }[/math] and [math]\displaystyle{ \mathbf{x} = (x,y) }[/math]. Similar to the Gaussian-scale space, [math]\displaystyle{ L(\mathbf{x}) }[/math] is the Gaussian-smoothed image. The [math]\displaystyle{ \mathbf{\otimes} }[/math] operator denotes convolution. [math]\displaystyle{ L_{x}(\mathbf{x},\sigma_{D}) }[/math] and [math]\displaystyle{ L_{y}(\mathbf{x}, \sigma_{D}) }[/math] are the derivatives in their respective direction applied to the smoothed image and calculated using a Gaussian kernel with scale [math]\displaystyle{ \sigma_D }[/math]. In terms of our Gaussian scale-space framework, the [math]\displaystyle{ \sigma_I }[/math] parameter determines the current scale at which the Harris corner points are detected.

Building upon this scale-adapted second-moment matrix, the Harris-Laplace detector is a twofold process: applying the Harris corner detector at multiple scales and automatically choosing the characteristic scale.

Multi-scale Harris corner points

The algorithm searches over a fixed number of predefined scales. This set of scales is defined as:

[math]\displaystyle{ \{\sigma_1, \dots, \sigma_n\} = \{k^{1}\sigma_0, \dots, k^{n}\sigma_0\} }[/math]

Mikolajczyk and Schmid (2004) use [math]\displaystyle{ k = 1.4 }[/math]. For each integration scale, [math]\displaystyle{ \sigma_I }[/math], chosen from this set, the appropriate differentiation scale is chosen to be a constant factor of the integration scale: [math]\displaystyle{ \sigma_D = s\sigma_I }[/math]. Mikolajczyk and Schmid (2004) use [math]\displaystyle{ s = 0.7 }[/math].[11] Using these scales, the interest points are detected using a Harris measure on the [math]\displaystyle{ \mu(\mathbf{x}, \sigma_{\mathit{I}}, \sigma_{\mathit{D}}) }[/math] matrix. The cornerness, like the typical Harris measure, is defined as:

[math]\displaystyle{ \mathit{cornerness} = \det(\mu(\mathbf{x}, \sigma_{\mathit{I}}, \sigma_{\mathit{D}})) - \alpha \operatorname{trace}^2(\mu(\mathbf{x}, \sigma_{\mathit{I}}, \sigma_{\mathit{D}})) }[/math]

Like the traditional Harris detector, corner points are those local (8 point neighborhood) maxima of the cornerness that are above a specified threshold.
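
The two ingredients combine into the following illustrative sketch: build the scale-adapted second-moment matrix at each predefined integration scale and evaluate the cornerness map on it. The constants k, s and α follow the values quoted above; the helper names are this sketch's own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_moment_matrix(image, sigma_i, sigma_d):
    """Scale-adapted mu(x, sigma_I, sigma_D): derivatives at sigma_D,
    sigma_D^2 normalization, Gaussian integration window g(sigma_I)."""
    Lx = gaussian_filter(image, sigma_d, order=(0, 1))
    Ly = gaussian_filter(image, sigma_d, order=(1, 0))
    w = sigma_d ** 2
    Mxx = w * gaussian_filter(Lx * Lx, sigma_i)
    Mxy = w * gaussian_filter(Lx * Ly, sigma_i)
    Myy = w * gaussian_filter(Ly * Ly, sigma_i)
    return Mxx, Mxy, Myy

def multiscale_cornerness(image, sigma0=1.0, k=1.4, n=8, s=0.7, alpha=0.04):
    """One cornerness map per integration scale sigma_I = k**i * sigma0."""
    maps = []
    for i in range(n):
        sigma_i = sigma0 * k ** i
        Mxx, Mxy, Myy = second_moment_matrix(image, sigma_i, s * sigma_i)
        maps.append((Mxx * Myy - Mxy ** 2) - alpha * (Mxx + Myy) ** 2)
    return maps
```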

Characteristic scale identification

An iterative algorithm based on Lindeberg (1998) both spatially localizes the corner points and selects the characteristic scale.[6] The iterative search has three key steps, which are carried out for each point [math]\displaystyle{ \mathbf{x} }[/math] that was initially detected at scale [math]\displaystyle{ \sigma_I }[/math] by the multi-scale Harris detector ([math]\displaystyle{ k }[/math] indicates the [math]\displaystyle{ k }[/math]th iteration):

  • Choose the scale [math]\displaystyle{ \sigma_I^{(k+1)} }[/math] that maximizes the Laplacian of Gaussian (LoG) over a predefined range of neighboring scales. The neighboring scales are typically chosen from a range that is within a two scale-space neighborhood. That is, if the original points were detected using a scaling factor of [math]\displaystyle{ 1.4 }[/math] between successive scales, a two scale-space neighborhood is the range [math]\displaystyle{ t \in [0.7, \dots, 1.4] }[/math]. Thus the Gaussian scales examined are [math]\displaystyle{ \sigma_I^{(k+1)} = t \sigma_I^{(k)} }[/math]. The LoG measurement is defined as:
[math]\displaystyle{ |\operatorname{LoG}(\mathbf{x}, \sigma_I)| = \sigma_I^2 \left|L_{xx}(\mathbf{x}, \sigma_I) + L_{yy}(\mathbf{x},\sigma_I)\right| }[/math]
where [math]\displaystyle{ L_{xx} }[/math] and [math]\displaystyle{ L_{yy} }[/math] are the second derivatives in their respective directions.[12] The [math]\displaystyle{ \sigma_I^2 }[/math] factor (as discussed above in Gaussian scale-space) is used to normalize the LoG across scales and make these measures comparable, thus making a maximum across scales meaningful. Mikolajczyk and Schmid (2001) demonstrate that the LoG measure attains the highest percentage of correctly detected corner points in comparison to other scale-selection measures.[9] The scale which maximizes this LoG measure in the two scale-space neighborhood is deemed the characteristic scale, [math]\displaystyle{ \sigma_I^{(k+1)} }[/math], and used in subsequent iterations. If no extremum of the LoG is found, the point is discarded from future searches.
  • Using the characteristic scale, the points are spatially localized. That is to say, the point [math]\displaystyle{ \mathbf{x}^{(k+1)} }[/math] is chosen such that it maximizes the Harris corner measure (cornerness as defined above) within an 8×8 local neighborhood.
  • Stopping criterion: [math]\displaystyle{ \sigma_I^{(k+1)} = \sigma_I^{(k)} }[/math] and [math]\displaystyle{ \mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} }[/math].

If the stopping criterion is not met, then the algorithm repeats from step 1 using the new [math]\displaystyle{ k+1 }[/math] points and scale. When the stopping criterion is met, the found points represent those that maximize the LoG across scales (scale selection) and maximize the Harris corner measure in a local neighborhood (spatial selection).
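
A sketch of the scale-selection step for a single point follows, reusing Gaussian-derivative filters for the LoG. Discarding points whose best response sits at the edge of the search range is used here as a crude stand-in for the "no maximum found" test; a full implementation would verify a genuine local maximum across scales.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def characteristic_scale(image, x, sigma_i,
                         t_values=(0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4)):
    """Scale t * sigma_I in the two scale-space neighborhood that maximizes
    |LoG| = sigma^2 |L_xx + L_yy| at pixel x = (row, col); None if the
    maximum falls on the boundary of the range (point is discarded)."""
    r, c = x
    best_s, best_log = None, -np.inf
    for t in t_values:
        s = t * sigma_i
        log = (s ** 2) * abs(gaussian_filter(image, s, order=(2, 0))[r, c]
                             + gaussian_filter(image, s, order=(0, 2))[r, c])
        if log > best_log:
            best_s, best_log = s, log
    if best_s in (t_values[0] * sigma_i, t_values[-1] * sigma_i):
        return None   # no interior maximum of the LoG: discard this point
    return best_s
```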

Affine-invariant points

Mathematical theory

The Harris-Laplace detected points are scale invariant and work well for isotropic regions that are viewed from the same viewing angle. In order to be invariant to arbitrary affine transformations (and viewpoints), the mathematical framework must be revisited. The second-moment matrix [math]\displaystyle{ \mathbf{\mu} }[/math] is defined more generally for anisotropic regions:

[math]\displaystyle{ \mu (\mathbf{x}, \Sigma_I, \Sigma_D) = \det(\Sigma_D) \, g(\Sigma_I) \otimes \left( \nabla L(\mathbf{x}, \Sigma_D)\nabla L(\mathbf{x}, \Sigma_D)^T \right) }[/math]

where [math]\displaystyle{ \Sigma_I }[/math] and [math]\displaystyle{ \Sigma_D }[/math] are covariance matrices defining the differentiation and the integration Gaussian kernel scales. Although this may look significantly different from the second-moment matrix in the Harris-Laplace detector, it is in fact identical. The earlier [math]\displaystyle{ \mu }[/math] matrix was the 2D-isotropic version in which the covariance matrices [math]\displaystyle{ \Sigma_I }[/math] and [math]\displaystyle{ \Sigma_D }[/math] were 2×2 identity matrices multiplied by factors [math]\displaystyle{ \sigma_I }[/math] and [math]\displaystyle{ \sigma_D }[/math], respectively. In the new formulation, one can think of the Gaussian kernels as multivariate Gaussian distributions as opposed to a uniform Gaussian kernel. A uniform Gaussian kernel can be thought of as an isotropic, circular region. Similarly, a more general Gaussian kernel defines an ellipse. In fact, the eigenvectors and eigenvalues of the covariance matrix define the rotation and size of the ellipse. Thus this representation allows us to completely define an arbitrary elliptical affine region over which to integrate or differentiate.
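
To make this geometric picture concrete, the following sketch samples a non-uniform Gaussian kernel from an arbitrary 2×2 covariance matrix; its level sets are exactly the ellipses described above, and for a covariance of σ² times the identity it reduces to the circular kernel w(x, y) defined earlier.

```python
import numpy as np

def anisotropic_gaussian(cov, radius=10):
    """Sample the kernel g(Sigma) on a (2*radius+1)^2 grid. The eigenvectors
    and eigenvalues of `cov` set the ellipse's orientation and axis lengths."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    pts = np.stack([xs.ravel(), ys.ravel()])            # 2 x N coordinates
    inv = np.linalg.inv(cov)
    quad = np.einsum('in,ij,jn->n', pts, inv, pts)      # x^T Sigma^-1 x
    g = np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return g.reshape(xs.shape)

# An elongated, rotated kernel; cov = sigma**2 * np.eye(2) recovers the
# circular case.
kernel = anisotropic_gaussian(np.array([[4.0, 1.5], [1.5, 1.0]]))
```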

The goal of the affine invariant detector is to identify regions in images that are related through affine transformations. We thus consider a point [math]\displaystyle{ \mathbf{x}_L }[/math] and the transformed point [math]\displaystyle{ \mathbf{x}_R = A\mathbf{x}_L }[/math], where A is an affine transformation. In the case of images, both [math]\displaystyle{ \mathbf{x}_R }[/math] and [math]\displaystyle{ \mathbf{x}_L }[/math] live in [math]\displaystyle{ R^2 }[/math] space. The second-moment matrices are related in the following manner:[3]

[math]\displaystyle{ \begin{align} \mu(\mathbf{x}_L,\Sigma_{I,L}, \Sigma_{D,L}) & {} = A^T \mu (\mathbf{x}_R, \Sigma_{I,R}, \Sigma_{D,R}) A \\ M_L & {} = \mu(\mathbf{x}_L,\Sigma_{I,L}, \Sigma_{D,L}) \\ M_R & {} = \mu (\mathbf{x}_R, \Sigma_{I,R}, \Sigma_{D,R}) \\ M_L & {} = A^T M_R A \\ \Sigma_{I,R} & {} = A \Sigma_{I,L} A^T\text{ and }\Sigma_{D,R} = A \Sigma_{D,L} A^T \end{align} }[/math]

where [math]\displaystyle{ \Sigma_{I,b} }[/math] and [math]\displaystyle{ \Sigma_{D,b} }[/math] are the covariance matrices for reference frame [math]\displaystyle{ b \in \{L, R\} }[/math]. If we continue with this formulation and enforce that

[math]\displaystyle{ \begin{align} \Sigma_{I,L} = \sigma_I M_L^{-1} \\ \Sigma_{D,L} = \sigma_D M_L^{-1} \end{align} }[/math]

where [math]\displaystyle{ \sigma_I }[/math] and [math]\displaystyle{ \sigma_D }[/math] are scalar factors, one can show that the covariance matrices for the related point are similarly related:

[math]\displaystyle{ \begin{align} \Sigma_{I,R} = \sigma_I M_R^{-1} \\ \Sigma_{D,R} = \sigma_D M_R^{-1} \end{align} }[/math]

By requiring the covariance matrices to satisfy these conditions, several nice properties arise. One of these properties is that the square root of the second-moment matrix, [math]\displaystyle{ M^{\tfrac{1}{2}} }[/math], will transform the original anisotropic regions into isotropic regions that are related simply through a pure rotation matrix [math]\displaystyle{ R }[/math]. These new isotropic regions can be thought of as a normalized reference frame. The following equations formulate the relation between the normalized points [math]\displaystyle{ \mathbf{x}'_R }[/math] and [math]\displaystyle{ \mathbf{x}'_L }[/math]:

[math]\displaystyle{ \begin{align} A &= M_R^{-\tfrac{1}{2}} R M_L^{\tfrac{1}{2}} \\ \mathbf{x}'_R &= M_R^{\tfrac{1}{2}}\mathbf{x}_R \\ \mathbf{x}'_L &= M_L^{\tfrac{1}{2}}\mathbf{x}_L \\ \mathbf{x}'_L &= R \mathbf{x}'_R \end{align} }[/math]

The rotation matrix can be recovered using gradient methods like those in the SIFT descriptor. As discussed with the Harris detector, the eigenvalues and eigenvectors of the second-moment matrix, [math]\displaystyle{ M = \mu(\mathbf{x}, \Sigma_I, \Sigma_D) }[/math], characterize the curvature and shape of the pixel intensities. That is, the eigenvector associated with the largest eigenvalue indicates the direction of largest change and the eigenvector associated with the smallest eigenvalue defines the direction of least change. In the 2D case, the eigenvectors and eigenvalues define an ellipse. For an isotropic region, the region should be circular in shape and not elliptical. This is the case when the eigenvalues have the same magnitude. Thus a measure of the isotropy around a local region is defined as the following:

[math]\displaystyle{ \mathcal{Q} = \frac{\lambda_{\min}(M)}{\lambda_{\max}(M)} }[/math]

where [math]\displaystyle{ \lambda_{\min}(M) }[/math] and [math]\displaystyle{ \lambda_{\max}(M) }[/math] denote the eigenvalues of [math]\displaystyle{ M }[/math]. This measure has the range [math]\displaystyle{ [0 \dots 1] }[/math]. A value of [math]\displaystyle{ 1 }[/math] corresponds to perfect isotropy.
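
Both the normalizing transform and the isotropy measure reduce to an eigendecomposition of a symmetric 2×2 matrix, as the following sketch shows (the numeric matrix is an arbitrary example):

```python
import numpy as np

def sqrtm_2x2(M):
    """M^(1/2) of a symmetric positive-definite matrix via eigendecomposition;
    M^(-1/2) follows by taking 1/sqrt of the eigenvalues instead."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

def isotropy(M):
    """Q = lambda_min / lambda_max, in [0, 1]; Q = 1 is perfect isotropy."""
    w = np.linalg.eigvalsh(M)   # ascending eigenvalues
    return w[0] / w[-1]

# Normalizing a point into the isotropic frame: x' = M^(1/2) x
M = np.array([[2.0, 0.6], [0.6, 1.0]])
x_prime = sqrtm_2x2(M) @ np.array([3.0, 4.0])
print(isotropy(M))   # < 1, so this region is anisotropic
```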

Iterative algorithm

Using this mathematical framework, the Harris affine detector algorithm iteratively discovers the second-moment matrix that transforms the anisotropic region into a normalized region in which the isotropic measure is sufficiently close to one. The algorithm uses this shape adaptation matrix, [math]\displaystyle{ U }[/math], to transform the image into a normalized reference frame. In this normalized space, the interest points' parameters (spatial location, integration scale and differentiation scale) are refined using methods similar to the Harris-Laplace detector. The second-moment matrix is computed in this normalized reference frame and should have an isotropic measure close to one at the final iteration. At every [math]\displaystyle{ k }[/math]th iteration, each interest region is defined by several parameters that the algorithm must discover: the [math]\displaystyle{ U^{(k)} }[/math] matrix, position [math]\displaystyle{ \mathbf{x}^{(k)} }[/math], integration scale [math]\displaystyle{ \sigma_I^{(k)} }[/math] and differentiation scale [math]\displaystyle{ \sigma_D^{(k)} }[/math]. Because the detector computes the second-moment matrix in the transformed domain, it is convenient to denote this transformed position as [math]\displaystyle{ \mathbf{x}_w^{(k)} }[/math], where [math]\displaystyle{ U^{(k)}\mathbf{x}_w^{(k)} = \mathbf{x}^{(k)} }[/math].

  1. The detector initializes the search space with points detected by the Harris-Laplace detector.
    [math]\displaystyle{ U^{(0)} = \mathit{identity} }[/math] and [math]\displaystyle{ \mathbf{x}^{(0)} }[/math], [math]\displaystyle{ \sigma_D^{(0)} }[/math], and [math]\displaystyle{ \sigma_I^{(0)} }[/math] are those from the Harris-Laplace detector.
  2. Apply the previous iteration's shape adaptation matrix, [math]\displaystyle{ U^{(k-1)} }[/math], to generate the normalized reference frame, [math]\displaystyle{ U^{(k-1)}\mathbf{x}_w^{(k-1)} = \mathbf{x}^{(k-1)} }[/math]. For the first iteration, [math]\displaystyle{ U^{(0)} }[/math] is applied.
  3. Select the integration scale, [math]\displaystyle{ \sigma_I^{(k)} }[/math], using a method similar to the Harris-Laplace detector. The scale is chosen as the one that maximizes the Laplacian of Gaussian (LoG). The search space consists of the scales within a two scale-space neighborhood of the previous iteration's scale.
    [math]\displaystyle{ \sigma_I^{(k)} = \underset{{\sigma_I = t\sigma_I^{(k-1)}\atop t \in [0.7, \dots, 1.4]}}{\operatorname{argmax}} \, \sigma_I^2 \left|L_{xx}(\mathbf{x}, \sigma_I) + L_{yy}(\mathbf{x},\sigma_I)\right| }[/math]
    It is important to note that the integration scale in the [math]\displaystyle{ U }[/math]-normalized space differs significantly from that in the non-normalized space. Therefore, it is necessary to search for the integration scale rather than reusing the scale from the non-normalized space.
  4. Select the differentiation scale, [math]\displaystyle{ \sigma_D^{(k)} }[/math]. In order to reduce the search space and degrees of freedom, the differentiation scale is taken to be related to the integration scale through a constant factor: [math]\displaystyle{ \sigma_D^{(k)} = s \sigma_I^{(k)} }[/math]. For obvious reasons, the constant factor is less than one. Mikolajczyk and Schmid (2001) note that too small a factor makes smoothing (integration) too significant in comparison to differentiation, while too large a factor does not allow the integration to average the covariance matrix.[9] It is common to choose [math]\displaystyle{ s \in [0.5,0.75] }[/math]. From this set, the chosen scale is the one that maximizes the isotropic measure [math]\displaystyle{ \mathcal{Q} = \frac{\lambda_{\min}(\mu)}{\lambda_{\max}(\mu)} }[/math].
    [math]\displaystyle{ \sigma_D^{(k)} = \underset{\sigma_D = s\sigma_I^{(k)},\; s \in [0.5, \dots, 0.75]}{\operatorname{argmax}} \, \frac{\lambda_{\min}(\mu(\mathbf{x}_w^{(k)}, \sigma_I^{(k)}, \sigma_D))}{\lambda_{\max}(\mu(\mathbf{x}_w^{(k)}, \sigma_I^{(k)}, \sigma_D))} }[/math]
    where [math]\displaystyle{ \mu(\mathbf{x}_w^{(k)}, \sigma_I^{(k)}, \sigma_D) }[/math] is the second-moment matrix evaluated in the normalized reference frame. This maximization process causes the eigenvalues to converge to the same value.
  5. Spatial Localization: Select the point [math]\displaystyle{ \mathbf{x}_w^{(k)} }[/math] that maximizes the Harris corner measure ([math]\displaystyle{ \mathit{cornerness} }[/math]) within an 8-point neighborhood around the previous [math]\displaystyle{ \mathbf{x}_w^{(k-1)} }[/math] point.
    [math]\displaystyle{ \mathbf{x}_w^{(k)} = \underset{\mathbf{x}_w \in W(\mathbf{x}_w^{(k-1)})}{\operatorname{argmax}} \, \det(\mu(\mathbf{x}_w, \sigma_I^{(k)}, \sigma_D^{(k)})) - \alpha \operatorname{trace}^2(\mu(\mathbf{x}_w, \sigma_I^{(k)}, \sigma_D^{(k)})) }[/math]
    where [math]\displaystyle{ \mu }[/math] is the second-moment matrix as defined above. The window [math]\displaystyle{ W(\mathbf{x}_w^{(k-1)}) }[/math] is the set of 8-nearest neighbors of the previous iteration's point in the normalized reference frame. Because our spatial localization was done in the [math]\displaystyle{ U }[/math]-normalized reference frame, the newly chosen point must be transformed back to the original reference frame. This is achieved by transforming a displacement vector and adding this to the previous point:
    [math]\displaystyle{ \mathbf{x}^{(k)} = \mathbf{x}^{(k-1)} + U^{(k-1)}\cdot (\mathbf{x}_w^{(k)} - \mathbf{x}_w^{(k-1)}) }[/math]
  6. As mentioned above, the square root of the second-moment matrix defines the transformation matrix that generates the normalized reference frame. We thus need to save this matrix: [math]\displaystyle{ \mu_i^{(k)} = \mu^{-\tfrac{1}{2}}(\mathbf{x}_w^{(k)}, \sigma_I^{(k)}, \sigma_D^{(k)}) }[/math]. The transformation matrix [math]\displaystyle{ U }[/math] is updated: [math]\displaystyle{ U^{(k)} = \mu_i^{(k)}\cdot U^{(k-1)} }[/math]. In order to ensure that the image gets sampled correctly and that we expand the image in the direction of least change (smallest eigenvalue), we fix the maximum eigenvalue: [math]\displaystyle{ \lambda_{\max}(U^{(k)}) = 1 }[/math]. Using this updating method, one can easily see that the final [math]\displaystyle{ U }[/math] matrix takes the following form:
    [math]\displaystyle{ U = \prod_{k} \mu_i^{(k)} \cdot U^{(0)} = \prod_{k} (\mu^{-\tfrac{1}{2}})^{(k)} \cdot U^{(0)} }[/math]
  7. If the stopping criterion is not met, continue to the next iteration at step 2. Because the algorithm iteratively solves for the normalization matrix [math]\displaystyle{ U }[/math] that transforms an anisotropic region into an isotropic region, it makes sense to stop when the isotropic measure, [math]\displaystyle{ \mathcal{Q} = \frac{\lambda_{\min}(\mu)}{\lambda_{\max}(\mu)} }[/math], is sufficiently close to its maximum value 1. "Sufficiently close" implies the following stopping condition (a code sketch of the update and stopping test follows this list):
    [math]\displaystyle{ 1 - \frac{\lambda_{\min}(\mu_i^{(k)})}{\lambda_{\max}(\mu_i^{(k)})} \lt \varepsilon_C }[/math]
    Mikolajczyk and Schmid (2004) had good success with [math]\displaystyle{ \varepsilon_C = 0.05 }[/math].
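
Steps 6 and 7 amount to a small amount of linear algebra per iteration, sketched below; here mu is the 2×2 second-moment matrix evaluated at the current point in the normalized frame, and the function names are this sketch's own.

```python
import numpy as np

def update_shape_matrix(U_prev, mu):
    """Step 6: U <- mu^(-1/2) @ U, then rescale so that lambda_max(U) = 1
    and the image is expanded along the direction of least change."""
    w, V = np.linalg.eigh(mu)
    mu_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    U = mu_inv_sqrt @ U_prev
    # U need not be symmetric, so normalize by the largest eigenvalue magnitude.
    U /= np.max(np.abs(np.linalg.eigvals(U)))
    return U

def converged(mu_i, eps_c=0.05):
    """Step 7 stopping test: 1 - lambda_min/lambda_max < eps_c."""
    w = np.linalg.eigvalsh(mu_i)
    return 1.0 - w[0] / w[-1] < eps_c
```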

Computation and implementation

The computational complexity of the Harris-Affine detector is broken into two parts: initial point detection and affine region normalization. The initial point detection algorithm, Harris-Laplace, has complexity [math]\displaystyle{ \mathcal{O}(n) }[/math] where [math]\displaystyle{ n }[/math] is the number of pixels in the image. The affine region normalization algorithm automatically detects the scale and estimates the shape adaptation matrix, [math]\displaystyle{ U }[/math]. This process has complexity [math]\displaystyle{ \mathcal{O}((m+k)p) }[/math], where [math]\displaystyle{ p }[/math] is the number of initial points, [math]\displaystyle{ m }[/math] is the size of the search space for the automatic scale selection and [math]\displaystyle{ k }[/math] is the number of iterations required to compute the [math]\displaystyle{ U }[/math] matrix.[11]

Some methods exist to reduce the complexity of the algorithm at the expense of accuracy. One method is to eliminate the search in the differentiation scale step. Rather than choosing a factor [math]\displaystyle{ s }[/math] from a set of factors, the sped-up algorithm chooses the scale to be constant across iterations and points: [math]\displaystyle{ \sigma_D = s \sigma_I,\; s = \text{constant} }[/math]. Although this reduction in search space might decrease the complexity, this change can severely affect the convergence of the [math]\displaystyle{ U }[/math] matrix.

Analysis

Convergence

One can imagine that this algorithm might identify duplicate interest points at multiple scales. Because the Harris affine algorithm looks at each initial point given by the Harris-Laplace detector independently, there is no discrimination between identical points. In practice, it has been shown that these points will ultimately all converge to the same interest point. After identifying all interest points, the algorithm accounts for duplicates by comparing the spatial coordinates ([math]\displaystyle{ \mathbf{x} }[/math]), the integration scale [math]\displaystyle{ \sigma_I }[/math], the isotropic measure [math]\displaystyle{ \tfrac{\lambda_{\min}(U)}{\lambda_{\max}(U)} }[/math] and the skew.[11] If these interest point parameters are similar within a specified threshold, then the points are labeled duplicates. The algorithm discards all these duplicate points except for the interest point that is closest to the average of the duplicates. Typically 30% of the Harris affine points are distinct and dissimilar enough not to be discarded.[11]

Mikolajczyk and Schmid (2004) showed that many of the initial points (often 40%) do not converge. The algorithm detects this divergence by stopping the iterative algorithm if the inverse of the isotropic measure is larger than a specified threshold: [math]\displaystyle{ \tfrac{\lambda_{\max}(U)}{\lambda_{\min}(U)} \gt t_\text{diverge} }[/math]. Mikolajczyk and Schmid (2004) use [math]\displaystyle{ t_\text{diverge} = 6 }[/math]. Of the points that did converge, the typical number of required iterations was 10.[2]

Quantitative measure

Quantitative analysis of affine region detectors takes into account both the accuracy of point locations and the overlap of regions across two images. Mikolajczyk and Schmid (2004) extend the repeatability measure of Schmid et al. (1998) as the ratio of point correspondences to the minimum number of detected points of the two images.[11][13]

[math]\displaystyle{ R_\text{score} = \frac{C(A,B)}{\min(n_A, n_B)} }[/math]

where [math]\displaystyle{ C(A,B) }[/math] is the number of corresponding points in images [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math], and [math]\displaystyle{ n_A }[/math] and [math]\displaystyle{ n_B }[/math] are the numbers of detected points in the respective images. Because each image represents 3D space, it might be the case that one image contains objects that are not in the second image and whose interest points therefore have no chance of corresponding. In order to make the repeatability measure valid, one must remove these points and consider only points that lie in both images; [math]\displaystyle{ n_A }[/math] and [math]\displaystyle{ n_B }[/math] count only those points such that [math]\displaystyle{ x_A = H \cdot x_B }[/math]. For a pair of two images related through a homography matrix [math]\displaystyle{ H }[/math], two points, [math]\displaystyle{ \mathbf{x_a} }[/math] and [math]\displaystyle{ \mathbf{x_b} }[/math], are said to correspond if:

Figure: overlap region of two elliptical regions.
  1. Error in pixel location is less than 1.5 pixels: [math]\displaystyle{ \| \mathbf{x_a} - H\cdot \mathbf{x_b} \| \lt 1.5 }[/math]
  2. The overlap error of the two affine points ([math]\displaystyle{ \epsilon_S }[/math]) must be less than a specified threshold (typically 40%).[1] For affine regions, this overlap error is the following:
    [math]\displaystyle{ \epsilon_S = 1 - \frac{\mu_a \cap (H^T \mu_b H)}{\mu_a \cup (H^T \mu_b H)} }[/math]

    where [math]\displaystyle{ \mu_a }[/math] and [math]\displaystyle{ \mu_b }[/math] are the recovered elliptical regions whose points satisfy [math]\displaystyle{ \mathbf{x}^T \mu \mathbf{x} = 1 }[/math]. Basically, this measure takes a ratio of areas: the area of overlap (intersection) divided by the total area (union). Perfect overlap has a ratio of one and thus [math]\displaystyle{ \epsilon_S = 0 }[/math]. Different scales affect the region of overlap and must be taken into account by normalizing the area of each region of interest. Regions with an overlap error as high as 50% are still viable for matching with a good descriptor[1] (a numerical sketch of this overlap computation is given after the matching score below).

    A second measure, a matching score, more practically assesses the detector's ability to identify matching points between images. Mikolajczyk and Schmid (2005) use a SIFT descriptor to identify matching points. In addition to being the closest points in SIFT-space, two matched points must also have a sufficiently small overlap error (as defined in the repeatability measure). The matching score is the ratio of the number of matched points and the minimum of the total detected points in each image:

    [math]\displaystyle{ M_\text{score} = \frac{M(A,B)}{\min(n_A,n_B)} }[/math],[1]
    where [math]\displaystyle{ M(A,B) }[/math] is the number of matching points and [math]\displaystyle{ n_A }[/math] and [math]\displaystyle{ n_B }[/math] are the numbers of detected regions in the respective images.
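
    The overlap error itself can be estimated numerically. The sketch below rasterizes the two elliptical regions on a common grid and returns one minus their intersection-over-union; it assumes μ_b has already been mapped through the homography as above, and the grid bounds are chosen generously.

```python
import numpy as np

def overlap_error(mu_a, mu_b, grid=400, extent=3.0):
    """1 - area(intersection)/area(union) of the regions x^T mu x <= 1."""
    lin = np.linspace(-extent, extent, grid)
    xs, ys = np.meshgrid(lin, lin)
    pts = np.stack([xs.ravel(), ys.ravel()])            # 2 x N sample points
    def inside(mu):
        q = np.einsum('in,ij,jn->n', pts, mu, pts)      # x^T mu x per point
        return q <= 1.0
    a, b = inside(mu_a), inside(mu_b)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 - inter / union

# Concentric circles of radius 1 and sqrt(2): error = 1 - pi/(2*pi) = 0.5
print(overlap_error(np.eye(2), 0.5 * np.eye(2)))
```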

Robustness to affine and other transformations

Mikolajczyk et al. (2005) have done a thorough analysis of several state-of-the-art affine region detectors: Harris affine, Hessian affine, MSER,[14] IBR & EBR[15] and salient[16] detectors.[1] Mikolajczyk et al. analyzed both structured images and textured images in their evaluation. Linux binaries of the detectors and their test images are freely available at their webpage. A brief summary of the results of Mikolajczyk et al. (2005) follows; see A comparison of affine region detectors for a more quantitative analysis.

  • Viewpoint Angle Change: The Harris affine detector has reasonable (average) robustness to these types of changes. The detector maintains a repeatability score of above 50% up to viewpoint angles of about 40 degrees, and tends to detect a high number of repeatable and matchable regions even under a large viewpoint change.
  • Scale Change: The Harris affine detector remains very consistent under scale changes. Although the number of points declines considerably at large scale changes (above 2.8), the repeatability (50-60%) and matching scores (25-30%) remain nearly constant, especially with textured images. This is consistent with the high performance of the iterative automatic scale selection algorithm.
  • Blurred Images: The Harris affine detector remains very stable under image blurring. Because the detector does not rely on image segmentation or region boundaries, the repeatability and matching scores remain constant.
  • JPEG Artifacts: The Harris affine detector degrades similarly to other affine detectors: repeatability and matching scores drop significantly above 80% compression.
  • Illumination Changes: The Harris affine detector, like other affine detectors, is very robust to illumination changes: repeatability and matching scores remain constant under decreasing light. This should be expected because the detectors rely heavily on relative intensities (derivatives) and not absolute intensities.

General trends

  • Harris affine region points tend to be small and numerous. Both the Harris-Affine detector and the Hessian-Affine detector consistently identify double the number of repeatable points as other affine detectors: ~1000 regions for an 800×640 image.[1] Small regions are less likely to be occluded but have a smaller chance of overlapping neighboring regions.
  • The Harris affine detector responds well to textured scenes in which there are a lot of corner-like parts. However, for some structured scenes, like buildings, the Harris-Affine detector also performs very well. This is complementary to MSER, which tends to do better with well-structured (segmentable) scenes.
  • Overall the Harris affine detector performs very well, but remains behind MSER and Hessian-Affine in all cases except blurred images.
  • Harris-Affine and Hessian-Affine detectors are less accurate than others: their repeatability score increases as the overlap threshold is increased.
  • The detected affine-invariant regions may still differ in their rotation and illumination. Any descriptor that uses these regions must account for the invariance when using the regions for matching or other comparisons.

Applications

Software packages

  • Affine Covariant Features: K. Mikolajczyk maintains a web page that contains Linux binaries of the Harris-Affine detector in addition to other detectors and descriptors. Matlab code is also available that can be used to illustrate and compute the repeatability of various detectors. Code and images are also available to duplicate the results found in the Mikolajczyk et al. (2005) paper.
  • lip-vireo - binary code for Linux, Windows and SunOS from the VIREO research group; see the homepage for more details.

External links

  • Presentation slides from Mikolajczyk et al. on their 2005 paper.
  • Cordelia Schmid's Computer Vision Lab
  • Code, test images and a bibliography of Affine Covariant Features maintained by Krystian Mikolajczyk and the Visual Geometry Group from the Robotics group at the University of Oxford.
  • Bibliography of feature (and blob) detectors maintained by the USC Institute for Robotics and Intelligent Systems
  • Digital implementation of the Laplacian of Gaussian

References

  1. 1.0 1.1 1.2 1.3 1.4 1.5 K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir and L. Van Gool, A comparison of affine region detectors. In IJCV 65(1/2):43-72, 2005
  2. 2.0 2.1 Mikolajczyk, K. and Schmid, C. 2002. An affine invariant interest point detector. In Proceedings of the 8th International Conference on Computer Vision, Vancouver, Canada.
  3. 3.0 3.1 T. Lindeberg and J. Garding (1997). "Shape-adapted smoothing in estimation of 3-D depth cues from affine distortions of local 2-D structure". Image and Vision Computing 15: pp. 415–434.
  4. A. Baumberg (2000). "Reliable feature matching across widely separated views". Proceedings of IEEE Conference on Computer Vision and Pattern Recognition: pp. I:1774–1781.
  5. Lindeberg, Tony, Scale-Space Theory in Computer Vision, Kluwer Academic Publishers, 1994, ISBN 0-7923-9418-6
  6. 6.0 6.1 6.2 T. Lindeberg (1998). "Feature detection with automatic scale selection". International Journal of Computer Vision 30 (2): pp. 77–116.
  7. Lindeberg, T. (2008). "Scale-space". in Wah, Benjamin. Encyclopedia of Computer Science and Engineering. IV. John Wiley and Sons. pp. 2495–2504. doi:10.1002/9780470050118.ecse609. ISBN 978-0470050118. http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A441147&dswid=-488. 
  8. 8.0 8.1 C. Harris and M. Stephens (1988). "A combined corner and edge detector". Proceedings of the 4th Alvey Vision Conference: pp. 147–151.
  9. 9.0 9.1 9.2 K. Mikolajczyk and C. Schmid. Indexing based on scale invariant interest points. In Proceedings of the 8th International Conference on Computer Vision, Vancouver, Canada, pages 525-531, 2001.
  10. Schmid, C., Mohr, R., and Bauckhage, C. 2000. Evaluation of interest point detectors. International Journal of Computer Vision, 37(2):151-172.
  11. 11.0 11.1 11.2 11.3 11.4 11.5 Mikolajczyk, K. and Schmid, C. 2004. Scale & affine invariant interest point detectors. International Journal on Computer Vision 60(1):63-86.
  12. Spatial Filters: Laplacian/Laplacian of Gaussian
  13. C. Schmid, R. Mohr, and C. Bauckhage. Comparing and evaluating interest points. In International Conference on Computer Vision, pp. 230–235, 1998.
  14. J. Matas, O. Chum, M. Urban, and T. Pajdla, Robust wide baseline stereo from maximally stable extremal regions. In BMVC pp. 384–393, 2002.
  15. T. Tuytelaars and L. Van Gool, Matching widely separated views based on affine invariant regions. In IJCV 59(1):61-85, 2004.
  16. T. Kadir, A. Zisserman, and M. Brady, An affine invariant salient region detector. In ECCV p. 404-416, 2004.
  17. http://staff.science.uva.nl/~gevers/pub/overview.pdf
  18. R. Datta, J. Li, and J. Z. Wang, "Content-based image retrieval - Approaches and trends of the new age," In Proc. Int. Workshop on Multimedia Information Retrieval, pp. 253-262, 2005.
  19. J. Sivic and A. Zisserman. Video google: A text retrieval approach to object matching in videos. In Proceedings of the International Conference on Computer Vision, Nice, France, 2003.
  20. J. Sivic and A. Zisserman. Video data mining using configurations of viewpoint invariant regions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington DC, USA, pp. 488-495, 2004.
  21. G. Dorko and C. Schmid. Selection of scale invariant neighborhoods for object class recognition. In Proceedings of International Conference on Computer Vision, Nice, France, pp. 634-640, 2003.
  22. Beril Sirmacek and Cem Unsalan (January 2011). "A probabilistic framework to detect buildings in aerial and satellite images". IEEE Transactions on Geoscience and Remote Sensing 49 (1): 211–221. doi:10.1109/TGRS.2010.2053713. Bibcode2011ITGRS..49..211S. https://elib.dlr.de/64838/1/5523977.pdf.