Roberts cross

Short description: Technique used in image processing and computer vision for edge detection

The Roberts cross operator is used in image processing and computer vision for edge detection. It was one of the first edge detectors and was initially proposed by Lawrence Roberts in 1963.[1] As a differential operator, the Roberts cross approximates the gradient of an image through discrete differentiation, achieved by computing the square root of the sum of the squares of the differences between diagonally adjacent pixels.

Motivation

According to Roberts, an edge detector should have the following properties: the produced edges should be well defined, the background should contribute as little noise as possible, and the intensity of the edges should correspond as closely as possible to what a human would perceive. With these criteria in mind, and based on the then-prevailing psychophysical theory, Roberts proposed the following equations:

[math]\displaystyle{ y_{i,j} = \sqrt{x_{i,j}} }[/math]
[math]\displaystyle{ z_{i,j} = \sqrt{(y_{i,j} - y_{i+1,j+1})^2 + (y_{i+1,j} - y_{i, j+1})^2 } }[/math]

where x is the initial intensity value in the image, y is its square-root transform, z is the computed derivative, and i, j represent the location in the image.
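
The following minimal sketch applies these two equations to a 2-D NumPy intensity array; the function name roberts_original and the border handling (the output shrinks by one row and one column) are illustrative choices, not part of the original formulation:

<syntaxhighlight lang="python">
import numpy as np

def roberts_original(x):
    """Apply Roberts's original equations to a 2-D intensity array x.

    The result is one row and one column smaller than the input, since
    z[i, j] uses the pixels at (i+1, j), (i, j+1) and (i+1, j+1).
    """
    x = np.asarray(x, dtype=float)
    y = np.sqrt(x)  # y_{i,j} = sqrt(x_{i,j})
    # z_{i,j} = sqrt((y_{i,j} - y_{i+1,j+1})^2 + (y_{i+1,j} - y_{i,j+1})^2)
    z = np.sqrt((y[:-1, :-1] - y[1:, 1:]) ** 2
                + (y[1:, :-1] - y[:-1, 1:]) ** 2)
    return z
</syntaxhighlight>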

The results of this operation highlight changes in intensity in a diagonal direction. One of the most appealing aspects of the operator is its simplicity: the kernels are small and contain only integers. However, with the speed of today's computers this advantage is negligible, and the Roberts cross suffers greatly from sensitivity to noise.[2]

Formulation

To perform edge detection with the Roberts operator, we first convolve the original image with the following two kernels:

[math]\displaystyle{ \begin{bmatrix} +1 & 0 \\ 0 & -1\\ \end{bmatrix} \quad \mbox{and} \quad \begin{bmatrix} 0 & +1 \\ -1 & 0 \\ \end{bmatrix}. }[/math]

Let [math]\displaystyle{ I(x,y) }[/math] be a point in the original image, let [math]\displaystyle{ G_x(x,y) }[/math] be a point in the image formed by convolving with the first kernel, and let [math]\displaystyle{ G_y(x,y) }[/math] be a point in the image formed by convolving with the second kernel. The gradient can then be defined as:

[math]\displaystyle{ \nabla I(x,y) = G(x,y) = \sqrt{ G_x^2 + G_y^2 }. }[/math]

The direction of the gradient can also be defined as follows:

[math]\displaystyle{ \Theta(x,y) = \arctan{\left(\frac{G_y(x,y)}{G_x(x,y)}\right)} - \frac{3\pi}{4}. }[/math]

Note that an angle of 0° corresponds to a vertical orientation, such that the direction of maximum contrast from black to white runs from left to right in the image.
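
As an illustrative sketch of this formulation, the code below (assuming NumPy and SciPy's ndimage.convolve) convolves a 2-D array with the two kernels above and computes the gradient magnitude and direction; the use of arctan2 in place of arctan (to keep quadrant information) and the 'nearest' border mode are implementation choices, not prescribed by the source:

<syntaxhighlight lang="python">
import numpy as np
from scipy import ndimage

# The two Roberts cross kernels from the formulation above.
KERNEL_X = np.array([[+1,  0],
                     [ 0, -1]], dtype=float)
KERNEL_Y = np.array([[ 0, +1],
                     [-1,  0]], dtype=float)

def roberts_cross(image):
    """Return the gradient magnitude G and direction Theta of a 2-D array."""
    image = np.asarray(image, dtype=float)
    # ndimage.convolve performs true convolution (the kernel is flipped),
    # matching the article's use of "convolve".
    gx = ndimage.convolve(image, KERNEL_X, mode='nearest')   # G_x
    gy = ndimage.convolve(image, KERNEL_Y, mode='nearest')   # G_y
    magnitude = np.sqrt(gx ** 2 + gy ** 2)                   # G = sqrt(G_x^2 + G_y^2)
    direction = np.arctan2(gy, gx) - 3 * np.pi / 4           # Theta, offset by 3*pi/4
    return magnitude, direction
</syntaxhighlight>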

Example comparisons

Here, four different gradient operators are used to estimate the magnitude of the gradient of the test image.

Grayscale test image of brick wall and bike rack
Gradient magnitude from Roberts cross operator
Gradient magnitude from Sobel operator
Gradient magnitude from Scharr operator
Gradient magnitude from Prewitt operator

References

  1. L. Roberts, "Machine Perception of 3-D Solids", in Optical and Electro-optical Information Processing, MIT Press, 1965.
  2. L. S. Davis, "A survey of edge detection techniques", Computer Graphics and Image Processing, vol. 4, no. 3, pp. 248–260, 1975.