Discrete Laplace operator

In mathematics, the discrete Laplace operator is an analog of the continuous Laplace operator, defined so that it has meaning on a graph or a discrete grid. For the case of a finite-dimensional graph (having a finite number of edges and vertices), the discrete Laplace operator is more commonly called the Laplacian matrix.

The discrete Laplace operator occurs in physics problems such as the Ising model and loop quantum gravity, as well as in the study of discrete dynamical systems. It is also used in numerical analysis as a stand-in for the continuous Laplace operator. Common applications include image processing,[1] where it is known as the Laplace filter, and in machine learning for clustering and semi-supervised learning on neighborhood graphs.

Definitions

Graph Laplacians

There are various definitions of the discrete Laplacian for graphs, differing by sign and scale factor (sometimes one averages over the neighboring vertices, other times one just sums; this makes no difference for a regular graph). The traditional definition of the graph Laplacian, given below, corresponds to the negative continuous Laplacian on a domain with a free boundary.

Let [math]\displaystyle{ G = (V,E) }[/math] be a graph with vertices [math]\displaystyle{ V }[/math] and edges [math]\displaystyle{ E }[/math]. Let [math]\displaystyle{ \phi\colon V\to R }[/math] be a function of the vertices taking values in a ring. Then, the discrete Laplacian [math]\displaystyle{ \Delta }[/math] acting on [math]\displaystyle{ \phi }[/math] is defined by

[math]\displaystyle{ (\Delta \phi)(v)=\sum_{w:\,d(w,v)=1}\left[\phi(v)-\phi(w)\right] }[/math]

where [math]\displaystyle{ d(w,v) }[/math] is the graph distance between vertices w and v. Thus, this sum is over the nearest neighbors of the vertex v. For a graph with a finite number of edges and vertices, this definition is identical to that of the Laplacian matrix. That is, [math]\displaystyle{ \phi }[/math] can be written as a column vector; [math]\displaystyle{ \Delta\phi }[/math] is then the product of the Laplacian matrix and this column vector, and [math]\displaystyle{ (\Delta \phi)(v) }[/math] is just the v-th entry of the product vector.

If the graph has weighted edges, that is, a weighting function [math]\displaystyle{ \gamma\colon E\to R }[/math] is given, then the definition can be generalized to

[math]\displaystyle{ (\Delta_\gamma\phi)(v)=\sum_{w:\,d(w,v)=1}\gamma_{wv}\left[\phi(v)-\phi(w)\right] }[/math]

where [math]\displaystyle{ \gamma_{wv} }[/math] is the weight value on the edge [math]\displaystyle{ wv\in E }[/math].

Closely related to the discrete Laplacian is the averaging operator:

[math]\displaystyle{ (M\phi)(v)=\frac{1}{\deg v}\sum_{w:\,d(w,v)=1}\phi(w). }[/math]
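
For a finite graph, both operators can be applied directly through the Laplacian matrix. The following is a minimal MATLAB/Octave sketch (the 4-vertex adjacency matrix and the vertex function are arbitrary placeholders); for a weighted graph, the 0/1 adjacency matrix is simply replaced by the matrix of edge weights [math]\displaystyle{ \gamma_{wv} }[/math].

% Discrete Laplacian and averaging operator on a small example graph.
A   = [0 1 1 0; 1 0 1 0; 1 1 0 1; 0 0 1 0];   % placeholder adjacency matrix
phi = [1; 2; 3; 4];                           % placeholder function on the vertices
D   = diag(sum(A, 2));                        % degree matrix
L   = D - A;                                  % Laplacian matrix
DeltaPhi = L * phi;       % (Delta phi)(v) = sum over neighbors w of phi(v) - phi(w)
Mphi     = D \ (A * phi); % averaging operator: mean of phi over the neighbors of v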

Mesh Laplacians

In addition to considering the connectivity of nodes and edges in a graph, mesh Laplace operators take into account the geometry of a surface (e.g. the angles at the nodes). For a two-dimensional manifold triangle mesh, the Laplace-Beltrami operator of a scalar function [math]\displaystyle{ u }[/math] at a vertex [math]\displaystyle{ i }[/math] can be approximated as

[math]\displaystyle{ (\Delta u)_{i} \equiv \frac{1}{2A_i} \sum_{j} (\cot \alpha_{ij} + \cot \beta_{ij}) (u_j - u_i), }[/math]

where the sum is over all adjacent vertices [math]\displaystyle{ j }[/math] of [math]\displaystyle{ i }[/math], [math]\displaystyle{ \alpha_{ij} }[/math] and [math]\displaystyle{ \beta_{ij} }[/math] are the two angles opposite the edge [math]\displaystyle{ ij }[/math], and [math]\displaystyle{ A_i }[/math] is the vertex area of [math]\displaystyle{ i }[/math], for example one third of the total area of the triangles incident to [math]\displaystyle{ i }[/math]. Note that the sign of the discrete Laplace-Beltrami operator is conventionally opposite the sign of the ordinary Laplace operator. The above cotangent formula can be derived using many different methods, among which are piecewise linear finite elements, finite volumes, and discrete exterior calculus.[2]

To facilitate computation, the Laplacian is encoded in a matrix [math]\displaystyle{ L\in\mathbb{R}^{|V|\times|V|} }[/math] such that [math]\displaystyle{ (Lu)_i = (\Delta u)_i }[/math]. Let [math]\displaystyle{ C }[/math] be the (sparse) cotangent matrix with entries

[math]\displaystyle{ C_{ij} = \begin{cases} \frac{1}{2}(\cot \alpha_{ij} + \cot \beta_{ij}) & ij \text{ is an edge, that is } j \in N(i), \\ -\sum\limits_{k \in N(i)}C_{ik} & i = j, \\ 0 & \text{otherwise} \end{cases} }[/math]

where [math]\displaystyle{ N(i) }[/math] denotes the neighborhood of [math]\displaystyle{ i }[/math], and let [math]\displaystyle{ M }[/math] be the diagonal mass matrix whose [math]\displaystyle{ i }[/math]-th diagonal entry is the vertex area [math]\displaystyle{ A_i }[/math]. Then [math]\displaystyle{ L=M^{-1}C }[/math] is the sought discretization of the Laplacian.
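
The following is a minimal MATLAB/Octave sketch of assembling [math]\displaystyle{ C }[/math], [math]\displaystyle{ M }[/math] and [math]\displaystyle{ L }[/math] for a triangle mesh, assuming a matrix V of vertex positions (n rows, 3 columns) and a matrix F of triangle indices (m rows, 3 columns) are given; the loop-based assembly is written for clarity, not speed.

n = size(V, 1);
C = sparse(n, n);                      % cotangent matrix, off-diagonal part first
Avert = zeros(n, 1);                   % lumped vertex areas A_i
for t = 1:size(F, 1)
    idx  = F(t, :);                    % the three vertices of triangle t
    dblA = norm(cross(V(idx(2),:) - V(idx(1),:), V(idx(3),:) - V(idx(1),:)));
    Avert(idx) = Avert(idx) + dblA / 6;            % one third of the triangle area
    for c = 1:3                        % corner k is opposite the edge (i, j)
        i = idx(mod(c, 3) + 1);
        j = idx(mod(c + 1, 3) + 1);
        k = idx(c);
        u = V(i,:) - V(k,:);
        w = V(j,:) - V(k,:);
        cotk = dot(u, w) / norm(cross(u, w));      % cotangent of the angle at k
        C(i, j) = C(i, j) + cotk / 2;              % each edge accumulates (cot a + cot b)/2
        C(j, i) = C(j, i) + cotk / 2;
    end
end
C = C - diag(sum(C, 2));               % diagonal entries: C_ii = -sum_k C_ik
L = spdiags(1 ./ Avert, 0, n, n) * C;  % L = M^{-1} C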

A more general overview of mesh Laplace operators is given by Reuter et al.[3]

Finite differences

Approximations of the Laplacian, obtained by the finite-difference method or by the finite-element method, can also be called discrete Laplacians. For example, the Laplacian in two dimensions can be approximated using the five-point stencil finite-difference method, resulting in

[math]\displaystyle{ \Delta f(x,y) \approx \frac{f(x-h,y) + f(x+h,y) + f(x,y-h) + f(x,y+h) - 4f(x,y)}{h^2}, }[/math]

where the grid size is h in both dimensions, so that the five-point stencil of a point (x, y) in the grid is

[math]\displaystyle{ \{(x-h, y), (x, y), (x+h, y), (x, y-h), (x, y+h)\}. }[/math]

If the grid size h = 1, the result is the negative discrete Laplacian on the graph, which is the square lattice grid. There are no constraints here on the values of the function f(x, y) on the boundary of the lattice grid, thus this is the case of no source at the boundary, that is, a no-flux boundary condition (aka, insulation, or homogeneous Neumann boundary condition). The control of the state variable at the boundary, as f(x, y) given on the boundary of the grid (aka, Dirichlet boundary condition), is rarely used for graph Laplacians, but is common in other applications.

Multidimensional discrete Laplacians on rectangular cuboid regular grids have very special properties, e.g., they are Kronecker sums of one-dimensional discrete Laplacians (see Kronecker sum of discrete Laplacians), in which case all of their eigenvalues and eigenvectors can be explicitly calculated.
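
As an illustration, the following MATLAB/Octave sketch assembles the 2D discrete Laplacian (in the positive-semidefinite, graph-Laplacian sign convention, with Dirichlet boundaries) as a Kronecker sum of two 1D discrete Laplacians; the grid dimensions are arbitrary placeholders.

nx = 5;  ny = 4;                          % placeholder grid dimensions
e  = @(n) ones(n, 1);
L1 = @(n) spdiags([-e(n) 2*e(n) -e(n)], -1:1, n, n);     % 1D discrete Laplacian
L2 = kron(speye(ny), L1(nx)) + kron(L1(ny), speye(nx));  % Kronecker sum
% Its eigenvalues are all sums 4*sin(i*pi/(2*(nx+1)))^2 + 4*sin(j*pi/(2*(ny+1)))^2,
% and its eigenvectors are Kronecker products of the 1D discrete sine modes.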

Finite-element method

In this approach, the domain is discretized into smaller elements, often triangles or tetrahedra, but other elements such as rectangles or cuboids are possible. The solution space is then approximated using so-called form functions of a pre-defined degree. The differential equation containing the Laplace operator is then transformed into a variational formulation, and a system of equations is constructed (a linear system or an eigenvalue problem). The resulting matrices are usually very sparse and can be solved with iterative methods.
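
As a minimal illustration of this approach (a sketch, not a general finite-element code): piecewise-linear elements on a uniform grid for the one-dimensional problem [math]\displaystyle{ -u'' = f }[/math] with homogeneous Dirichlet boundary conditions, using a placeholder source term.

n = 50;  h = 1 / (n + 1);  x = (1:n)' * h;   % interior nodes of [0, 1]
e = ones(n, 1);
K = spdiags([-e 2*e -e], -1:1, n, n) / h;    % stiffness matrix of linear elements
f = @(x) sin(pi * x);                        % placeholder right-hand side
b = h * f(x);                                % load vector via nodal quadrature
u = K \ b;                                   % sparse solve; exact solution is sin(pi*x)/pi^2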

Image processing

The discrete Laplace operator is often used in image processing, e.g. in edge detection and motion estimation applications.[4] The discrete Laplacian is defined as the sum of the second derivatives (Laplace operator) and is calculated as a sum of differences over the nearest neighbours of the central pixel. Since derivative filters are often sensitive to noise in an image, the Laplace operator is often preceded by a smoothing filter (such as a Gaussian filter) in order to remove the noise before calculating the derivative. The smoothing filter and Laplace filter are often combined into a single filter.[5]

Implementation via operator discretization

For one-, two- and three-dimensional signals, the discrete Laplacian can be given as convolution with the following kernels:

1D filter: [math]\displaystyle{ \vec{D}^2_x=\begin{bmatrix}1 & -2 & 1\end{bmatrix} }[/math],
2D filter: [math]\displaystyle{ \mathbf{D}^2_{xy}=\begin{bmatrix}0 & 1 & 0\\1 & -4 & 1\\0 & 1 & 0\end{bmatrix} }[/math].

[math]\displaystyle{ \mathbf{D}^2_{xy} }[/math] corresponds to the five-point stencil finite-difference formula seen previously. It is stable for very smoothly varying fields, but for equations with rapidly varying solutions a more stable and isotropic form of the Laplacian operator is required,[6] such as the nine-point stencil, which includes the diagonals:

2D filter: [math]\displaystyle{ \mathbf{D}^2_{xy}=\begin{bmatrix}0.25 & 0.5 & 0.25\\0.5 & -3 & 0.5\\0.25 & 0.5 & 0.25\end{bmatrix} }[/math],
3D filter: [math]\displaystyle{ \mathbf{D}^2_{xyz} }[/math] using the seven-point stencil is given by:
first plane = [math]\displaystyle{ \begin{bmatrix}0 & 0 & 0\\0 & 1 & 0\\0 & 0 & 0\end{bmatrix} }[/math]; second plane = [math]\displaystyle{ \begin{bmatrix}0 & 1 & 0\\1 & -6 & 1\\0 & 1 & 0\end{bmatrix} }[/math]; third plane = [math]\displaystyle{ \begin{bmatrix}0 & 0 & 0\\0 & 1 & 0\\0 & 0 & 0\end{bmatrix} }[/math].
and using the 27-point stencil by:[7]
first plane = [math]\displaystyle{ \frac{1}{26}\begin{bmatrix}2 & 3 & 2\\3 & 6 & 3\\2 & 3 & 2\end{bmatrix} }[/math]; second plane = [math]\displaystyle{ \frac{1}{26}\begin{bmatrix}3 & 6 & 3\\6 & -88 & 6\\3 & 6 & 3\end{bmatrix} }[/math]; third plane = [math]\displaystyle{ \frac{1}{26}\begin{bmatrix}2 & 3 & 2\\3 & 6 & 3\\2 & 3 & 2\end{bmatrix} }[/math].
nD filter: For the element [math]\displaystyle{ a_{x_1, x_2, \dots , x_n} }[/math] of the kernel [math]\displaystyle{ \mathbf{D}^2_{x_1, x_2, \dots , x_n}, }[/math]
[math]\displaystyle{ a_{x_1, x_2, \dots , x_n} = \left\{\begin{array}{ll} -2n & \text{if } s = n, \\ 1 & \text{if } s = n - 1, \\ 0 & \text{otherwise,} \end{array}\right. }[/math]
where xi is the position (either −1, 0 or 1) of the element in the kernel in the i-th direction, and s is the number of directions i for which xi = 0.

Note that a graph-based generalization of the Laplacian, which treats all surrounding cells as neighbors at an equal distance, leads instead to the following 2D filter with diagonals included, rather than the version above:

2D filter: [math]\displaystyle{ \mathbf{D}^2_{xy}=\begin{bmatrix}1 & 1 & 1\\1 & -8 & 1\\1 & 1 & 1\end{bmatrix}. }[/math]

These kernels are deduced using discrete difference quotients.
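
A minimal MATLAB/Octave sketch of applying these kernels to an image by convolution (the random image is a placeholder):

I  = rand(64);                                      % placeholder grayscale image
D5 = [0 1 0; 1 -4 1; 0 1 0];                        % five-point stencil
D9 = [0.25 0.5 0.25; 0.5 -3 0.5; 0.25 0.5 0.25];    % nine-point stencil with diagonals
L5 = conv2(I, D5, 'same');                          % discrete Laplacian, 4-neighbour form
L9 = conv2(I, D9, 'same');                          % more isotropic 8-neighbour form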

It can be shown[8][9] that the following discrete approximation of the two-dimensional Laplacian operator as a convex combination of difference operators

[math]\displaystyle{ \nabla^2_{\gamma}= (1 - \gamma) \nabla^2_{5} + \gamma \nabla ^2_{\times} = (1 - \gamma) \begin{bmatrix}0 & 1 & 0\\1 & -4 & 1\\0 & 1 & 0\end{bmatrix} + \gamma \begin{bmatrix}1/2 & 0 & 1/2\\0 & -2 & 0\\1/2 & 0 & 1/2\end{bmatrix} }[/math]

for γ ∈ [0, 1] is compatible with discrete scale-space properties, where specifically the value γ = 1/3 gives the best approximation of rotational symmetry.[8][9][10] Regarding three-dimensional signals, it is shown[9] that the Laplacian operator can be approximated by the two-parameter family of difference operators

[math]\displaystyle{ \nabla^2_{\gamma_1,\gamma_2} = (1 - \gamma_1 - \gamma_2) \, \nabla_7^2 + \gamma_1 \, \nabla_{+^3}^2 + \gamma_2 \, \nabla_{\times^3}^2 , }[/math]

where

[math]\displaystyle{ (\nabla_7^2 f)_{0, 0, 0} = f_{-1, 0, 0} + f_{+1, 0, 0} + f_{0, -1, 0} + f_{0, +1, 0} + f_{0, 0, -1} + f_{0, 0, +1} - 6 f_{0, 0, 0}, }[/math]
[math]\displaystyle{ (\nabla_{+^3}^2 f)_{0, 0, 0} = \frac{1}{4} (f_{-1, -1, 0} + f_{-1, +1, 0} + f_{+1, -1, 0} + f_{+1, +1, 0} + f_{-1, 0, -1} + f_{-1, 0, +1} + f_{+1, 0, -1} + f_{+1, 0, +1} + f_{0, -1, -1} + f_{0, -1, +1} + f_{0, +1, -1} + f_{0, +1, +1} - 12 f_{0, 0, 0}), }[/math]
[math]\displaystyle{ (\nabla_{\times^3}^2 f)_{0, 0, 0} = \frac{1}{4} (f_{-1, -1, -1} + f_{-1, -1, +1} + f_{-1, +1, -1} + f_{-1, +1, +1} + f_{+1, -1, -1} + f_{+1, -1, +1} + f_{+1, +1, -1} + f_{+1, +1, +1} - 8 f_{0, 0, 0}). }[/math]
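
For example, inserting the rotationally optimal value γ = 1/3 into the two-dimensional family above gives the kernel

[math]\displaystyle{ \nabla^2_{1/3} = \frac{2}{3} \begin{bmatrix}0 & 1 & 0\\1 & -4 & 1\\0 & 1 & 0\end{bmatrix} + \frac{1}{3} \begin{bmatrix}1/2 & 0 & 1/2\\0 & -2 & 0\\1/2 & 0 & 1/2\end{bmatrix} = \frac{1}{6} \begin{bmatrix}1 & 4 & 1\\4 & -20 & 4\\1 & 4 & 1\end{bmatrix}. }[/math]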

Implementation via continuous reconstruction

A discrete signal, such as an image, can be viewed as a discrete representation of a continuous function [math]\displaystyle{ f(\bar r) }[/math], where the coordinate vector [math]\displaystyle{ \bar r \in R^n }[/math] and the value domain is real, [math]\displaystyle{ f\in R }[/math]. Differentiation is therefore directly applicable to the continuous function [math]\displaystyle{ f }[/math]. In particular, under reasonable assumptions on the discretization process, e.g. band-limited functions or functions expandable in wavelets, any discrete image can be reconstructed by means of well-behaved interpolation functions underlying the reconstruction formula,[11]

[math]\displaystyle{ f(\bar r)=\sum_{k\in K}f_k \mu_k(\bar r) }[/math]

where [math]\displaystyle{ f_k\in R }[/math] are discrete representations of [math]\displaystyle{ f }[/math] on the grid [math]\displaystyle{ K }[/math] and [math]\displaystyle{ \mu_k }[/math] are interpolation functions specific to the grid [math]\displaystyle{ K }[/math]. On a uniform grid, such as images, and for bandlimited functions, the interpolation functions are shift invariant, amounting to [math]\displaystyle{ \mu_k(\bar r)= \mu(\bar r-\bar r_k) }[/math] with [math]\displaystyle{ \mu }[/math] being an appropriately dilated sinc function defined in [math]\displaystyle{ n }[/math] dimensions, i.e. [math]\displaystyle{ \bar r=(x_1,x_2...x_n)^T }[/math]. Other approximations of [math]\displaystyle{ \mu }[/math] on uniform grids are appropriately dilated Gaussian functions in [math]\displaystyle{ n }[/math] dimensions. Accordingly, the discrete Laplacian becomes a discrete version of the Laplacian of the continuous [math]\displaystyle{ f(\bar r) }[/math]

[math]\displaystyle{ \nabla^2 f(\bar r_k)= \sum_{k'\in K}f_{k'} (\nabla^2 \mu(\bar r-\bar r_{k'}))|_{\bar r= \bar r_k} }[/math]

which in turn is a convolution with the Laplacian of the interpolation function on the uniform (image) grid [math]\displaystyle{ K }[/math]. An advantage of using Gaussians as interpolation functions is that they yield linear operators, including Laplacians, that are free from rotational artifacts of the coordinate frame in which [math]\displaystyle{ f }[/math] is represented via [math]\displaystyle{ f_k }[/math] in [math]\displaystyle{ n }[/math] dimensions, and that are frequency aware by definition. A linear operator has not only a limited range in the [math]\displaystyle{ \bar r }[/math] domain but also an effective range in the frequency domain (alternatively, Gaussian scale space), which can be controlled explicitly via the variance of the Gaussian in a principled manner. The resulting filtering can be implemented by separable filters and decimation (signal processing)/pyramid (image processing) representations for further computational efficiency in [math]\displaystyle{ n }[/math] dimensions. In other words, the discrete Laplacian filter of any size can be generated conveniently as the sampled Laplacian of Gaussian with spatial size befitting the needs of a particular application, as controlled by its variance. Monomials, which are non-linear operators, can also be implemented using a similar reconstruction and approximation approach provided that the signal is sufficiently over-sampled. Thereby, such non-linear operators, e.g. the structure tensor and the generalized structure tensor, which are used in pattern recognition for their total least-squares optimality in orientation estimation, can be realized.
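
A minimal MATLAB/Octave sketch of this construction, sampling the analytic Laplacian of a Gaussian to obtain a discrete Laplacian filter whose support is controlled by the variance (the standard deviation and the zero-mean adjustment are placeholder choices):

sigma = 1.5;                          % placeholder standard deviation
r = ceil(3 * sigma);                  % kernel half-width
[x, y] = meshgrid(-r:r, -r:r);
G   = exp(-(x.^2 + y.^2) / (2 * sigma^2)) / (2 * pi * sigma^2);  % 2D Gaussian
LoG = ((x.^2 + y.^2 - 2 * sigma^2) / sigma^4) .* G;              % its Laplacian
LoG = LoG - mean(LoG(:));             % make the filter respond with zero to constants
% Lf = conv2(f, LoG, 'same');         % discrete Laplacian of a sampled image f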

Spectrum

The spectrum of the discrete Laplacian on an infinite grid is of key interest; since it is a self-adjoint operator, it has a real spectrum. For the convention [math]\displaystyle{ \Delta = I - M }[/math] on [math]\displaystyle{ Z }[/math], the spectrum lies within [math]\displaystyle{ [0,2] }[/math] (as the averaging operator has spectral values in [math]\displaystyle{ [-1,1] }[/math]). This may also be seen by applying the Fourier transform. Note that the discrete Laplacian on an infinite grid has purely absolutely continuous spectrum, and therefore, no eigenvalues or eigenfunctions.
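
To illustrate, on [math]\displaystyle{ Z }[/math] the plane waves [math]\displaystyle{ \phi_\theta(n) = e^{i\theta n} }[/math] are generalized eigenfunctions of the averaging operator, so that

[math]\displaystyle{ (\Delta \phi_\theta)(n) = \phi_\theta(n) - \tfrac{1}{2}\left(\phi_\theta(n-1) + \phi_\theta(n+1)\right) = (1 - \cos\theta)\, e^{i\theta n}, }[/math]

and the values [math]\displaystyle{ 1 - \cos\theta }[/math] sweep out the interval [math]\displaystyle{ [0,2] }[/math] as [math]\displaystyle{ \theta }[/math] ranges over [math]\displaystyle{ [-\pi,\pi] }[/math]; since the plane waves are not square-summable, none of these spectral values is an eigenvalue.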

Theorems

If the graph is an infinite square lattice grid, then this definition of the Laplacian can be shown to correspond to the continuous Laplacian in the limit of an infinitely fine grid. Thus, for example, on a one-dimensional grid we have

[math]\displaystyle{ \frac{\partial^2F}{\partial x^2} = \lim_{\epsilon \rightarrow 0} \frac{[F(x+\epsilon)-F(x)]-[F(x)-F(x-\epsilon)]}{\epsilon^2}. }[/math]

This definition of the Laplacian is commonly used in numerical analysis and in image processing. In image processing, it is considered to be a type of digital filter, more specifically an edge filter, called the Laplace filter.

Discrete heat equation

Suppose [math]\displaystyle{ \phi }[/math] describes a temperature distribution across a graph, where [math]\displaystyle{ \phi_i }[/math] is the temperature at vertex [math]\displaystyle{ i }[/math]. According to Newton's law of cooling, the heat transferred from node [math]\displaystyle{ i }[/math] to node [math]\displaystyle{ j }[/math] is proportional to [math]\displaystyle{ \phi_i - \phi_j }[/math] if nodes [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] are connected (if they are not connected, no heat is transferred). Then, for thermal conductivity [math]\displaystyle{ k }[/math],

[math]\displaystyle{ \begin{align} \frac{d \phi_i}{d t} &= -k \sum_j A_{ij} \left(\phi_i - \phi_j \right) \\ &= -k \left(\phi_i \sum_j A_{ij} - \sum_j A_{ij} \phi_j \right) \\ &= -k \left(\phi_i \ \deg(v_i) - \sum_j A_{ij} \phi_j \right) \\ &= -k \sum_j \left(\delta_{ij} \ \deg(v_i) - A_{ij} \right) \phi_j \\ &= -k \sum_j \left(L_{ij} \right) \phi_j. \end{align} }[/math]

In matrix-vector notation,

[math]\displaystyle{ \begin{align} \frac{d\phi}{dt} &= -k(D - A)\phi \\ &= -kL \phi, \end{align} }[/math]

which gives

[math]\displaystyle{ \frac{d \phi}{d t} + kL\phi = 0. }[/math]

Notice that this equation takes the same form as the heat equation, where the matrix −L is replacing the Laplacian operator [math]\displaystyle{ \nabla^2 }[/math]; hence, the "graph Laplacian".

To find a solution to this differential equation, apply standard techniques for solving a first-order matrix differential equation. That is, write [math]\displaystyle{ \phi }[/math] as a linear combination of eigenvectors [math]\displaystyle{ \mathbf{v}_i }[/math] of L (so that [math]\displaystyle{ L\mathbf{v}_i = \lambda_i \mathbf{v}_i }[/math]) with time-dependent coefficients, [math]\displaystyle{ \phi(t) = \sum_i c_i(t) \mathbf{v}_i. }[/math]

Plugging into the original expression (because L is a symmetric matrix, its unit-norm eigenvectors [math]\displaystyle{ \mathbf{v}_i }[/math] are orthogonal):

[math]\displaystyle{ \begin{align} 0 ={} &\frac{d\left(\sum_i c_i(t) \mathbf{v}_i\right)}{dt} + kL\left(\sum_i c_i(t) \mathbf{v}_i\right) \\ {}={} &\sum_i \left[\frac{dc_i(t)}{dt} \mathbf{v}_i + k c_i(t) L \mathbf{v}_i\right] \\ {}={} &\sum_i \left[\frac{dc_i(t)}{dt} \mathbf{v}_i + k c_i(t) \lambda_i \mathbf{v}_i\right] \\ \Rightarrow 0 ={} &\frac{dc_i(t)}{dt} + k \lambda_i c_i(t), \\ \end{align} }[/math]

whose solution is

[math]\displaystyle{ c_i(t) = c_i(0) e^{-k \lambda_i t}. }[/math]

The eigenvalues [math]\displaystyle{ \lambda_i }[/math] of L are non-negative, showing that the solution to the diffusion equation approaches an equilibrium, because it only exponentially decays or remains constant. This also shows that given [math]\displaystyle{ \lambda_i }[/math] and the initial condition [math]\displaystyle{ c_i(0) }[/math], the solution at any time t can be found.[12]

To find [math]\displaystyle{ c_i(0) }[/math] for each [math]\displaystyle{ i }[/math] in terms of the overall initial condition [math]\displaystyle{ \phi(0) }[/math], simply project [math]\displaystyle{ \phi(0) }[/math] onto the unit-norm eigenvectors [math]\displaystyle{ \mathbf{v}_i }[/math];

[math]\displaystyle{ c_i(0) = \left\langle \phi(0), \mathbf{v}_i \right\rangle }[/math].
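
Combining these pieces, the solution can be written compactly as the spectral expansion

[math]\displaystyle{ \phi(t) = \sum_i \left\langle \phi(0), \mathbf{v}_i \right\rangle e^{-k \lambda_i t}\, \mathbf{v}_i, }[/math]

which (with k = 1) is exactly what the MATLAB example later in this article evaluates.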

This approach has been applied to quantitative heat transfer modelling on unstructured grids.[13] [14]

In the case of undirected graphs, this works because [math]\displaystyle{ L }[/math] is symmetric, and by the spectral theorem, its eigenvectors are all orthogonal. So the projection onto the eigenvectors of [math]\displaystyle{ L }[/math] is simply an orthogonal coordinate transformation of the initial condition to a set of coordinates which decay exponentially and independently of each other.

Equilibrium behavior

To understand [math]\displaystyle{ \lim_{t \to \infty}\phi(t) }[/math], the only terms [math]\displaystyle{ c_i(t) = c_i(0) e^{-k \lambda_i t} }[/math] that remain are those where [math]\displaystyle{ \lambda_i = 0 }[/math], since

[math]\displaystyle{ \lim_{t\to\infty} e^{-k \lambda_i t} = \begin{cases} 0, & \text{if } \lambda_i \gt 0 \\ 1, & \text{if } \lambda_i = 0 \end{cases} }[/math]

In other words, the equilibrium state of the system is determined completely by the kernel of [math]\displaystyle{ L }[/math].

Since by definition, [math]\displaystyle{ \sum_{j}L_{ij} = 0 }[/math], the vector [math]\displaystyle{ \mathbf{v}^1 }[/math] of all ones is in the kernel. If there are [math]\displaystyle{ k }[/math] disjoint connected components in the graph, then this vector of all ones can be split into the sum of [math]\displaystyle{ k }[/math] independent [math]\displaystyle{ \lambda = 0 }[/math] eigenvectors of ones and zeros, where each connected component corresponds to an eigenvector with ones at the elements in the connected component and zeros elsewhere.

The consequence of this is that for a given initial condition [math]\displaystyle{ \phi(0) }[/math] for a graph with [math]\displaystyle{ N }[/math] vertices

[math]\displaystyle{ \lim_{t\to\infty}\phi(t) = \left\langle \phi(0), \mathbf{v^1} \right\rangle \mathbf{v^1} }[/math]

where

[math]\displaystyle{ \mathbf{v^1} = \frac{1}{\sqrt{N}} [1, 1, \ldots, 1] }[/math]

For each element [math]\displaystyle{ \phi_j }[/math] of [math]\displaystyle{ \phi }[/math], i.e. for each vertex [math]\displaystyle{ j }[/math] in the graph, this can be rewritten as

[math]\displaystyle{ \lim_{t\to\infty}\phi_j(t) = \frac{1}{N} \sum_{i = 1}^N \phi_i(0) }[/math].

In other words, at steady state, the value of [math]\displaystyle{ \phi }[/math] converges to the same value at each of the vertices of the graph, which is the average of the initial values at all of the vertices. Since this is the solution to the heat diffusion equation, this makes perfect sense intuitively. We expect that neighboring elements in the graph will exchange energy until that energy is spread out evenly throughout all of the elements that are connected to each other.

Example of the operator on a grid

Animation: the progression of diffusion, as solved by the graph Laplacian technique. A graph is constructed over a grid, where each pixel is connected to its 8 bordering pixels. Values in the image then diffuse smoothly to their neighbors over time via these connections. The animation starts with three strong point values which spill over to their neighbors slowly; the whole system eventually settles to the same value at equilibrium.

This section shows an example of a function [math]\displaystyle{ \phi }[/math] diffusing over time through a graph. The graph in this example is constructed on a 2D discrete grid, with points on the grid connected to their eight neighbors. Three initial points are specified to have a positive value, while the rest of the values in the grid are zero. Over time, the exponential decay acts to distribute the values at these points evenly throughout the entire grid.

The complete MATLAB source code that was used to generate this animation is provided below. It shows the process of specifying initial conditions, projecting these initial conditions onto the eigenvectors of the Laplacian matrix, and simulating the exponential decay of these projected initial conditions.

N = 20; % The number of pixels along a dimension of the image
Adj = zeros(N * N, N * N); % The adjacency matrix

% Use 8 neighbors, and fill in the adjacency matrix
dx = [- 1, 0, 1, - 1, 1, - 1, 0, 1];
dy = [- 1, - 1, - 1, 0, 0, 1, 1, 1];
for x = 1:N
    for y = 1:N
        index = (x - 1) * N + y;
        for ne = 1:length(dx)
            newx = x + dx(ne);
            newy = y + dy(ne);
            if newx > 0 && newx <= N && newy > 0 && newy <= N
                index2 = (newx - 1) * N + newy;
                Adj(index, index2) = 1;
            end
        end
    end
end

% BELOW IS THE KEY CODE THAT COMPUTES THE SOLUTION TO THE DIFFERENTIAL EQUATION
Deg = diag(sum(Adj, 2)); % Compute the degree matrix
L = Deg - Adj; % Compute the laplacian matrix in terms of the degree and adjacency matrices
[V, D] = eig(L); % Compute the eigenvalues/vectors of the laplacian matrix
D = diag(D);

% Initial condition (place a few large positive values around and
% make everything else zero)
C0 = zeros(N, N);
C0(2:5, 2:5) = 5;
C0(10:15, 10:15) = 10;
C0(2:5, 8:13) = 7;
C0 = C0(:);

C0V = V'*C0; % Transform the initial condition into the coordinate system
% of the eigenvectors
for t = 0:0.05:5
    % Loop through times and decay each initial component
    Phi = C0V .* exp(- D * t); % Exponential decay for each component
    Phi = V * Phi; % Transform from eigenvector coordinate system to original coordinate system
    Phi = reshape(Phi, N, N);
    % Display the results and write to GIF file
    imagesc(Phi);
    caxis([0, 10]);
    title(sprintf('Diffusion t = %.3f', t));
    frame = getframe(1);
    im = frame2im(frame);
    [imind, cm] = rgb2ind(im, 256);
    if t == 0
        imwrite(imind, cm, 'out.gif', 'gif', 'Loopcount', inf, 'DelayTime', 0.1);
    else
        imwrite(imind, cm, 'out.gif', 'gif', 'WriteMode', 'append', 'DelayTime', 0.1);
    end
end

Discrete Schrödinger operator

Let [math]\displaystyle{ P\colon V\rightarrow R }[/math] be a potential function defined on the graph. Note that P can be considered to be a multiplicative operator acting diagonally on [math]\displaystyle{ \phi }[/math]

[math]\displaystyle{ (P\phi)(v)=P(v)\phi(v). }[/math]

Then [math]\displaystyle{ H=\Delta+P }[/math] is the discrete Schrödinger operator, an analog of the continuous Schrödinger operator.

If the number of edges meeting at a vertex is uniformly bounded, and the potential is bounded, then H is bounded and self-adjoint.

The spectral properties of this Hamiltonian can be studied with Stone's theorem; this is a consequence of the duality between posets and Boolean algebras.

On regular lattices, the operator typically has both traveling-wave as well as Anderson localization solutions, depending on whether the potential is periodic or random.

The Green's function of the discrete Schrödinger operator is given in the resolvent formalism by

[math]\displaystyle{ G(v,w;\lambda)=\left\langle\delta_v\left| \frac{1}{H-\lambda}\right| \delta_w\right\rangle }[/math]

where [math]\displaystyle{ \delta_w }[/math] is understood to be the Kronecker delta function on the graph: [math]\displaystyle{ \delta_w(v)=\delta_{wv} }[/math]; that is, it equals 1 if v=w and 0 otherwise.

For fixed [math]\displaystyle{ w\in V }[/math] and [math]\displaystyle{ \lambda }[/math] a complex number, the Green's function considered to be a function of v is the unique solution to

[math]\displaystyle{ (H-\lambda)G(v,w;\lambda)=\delta_w(v). }[/math]
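
On a finite graph this amounts to a sparse linear solve. A minimal MATLAB/Octave sketch (the graph, the potential and the spectral parameter are arbitrary placeholders):

A = [0 1 1 0; 1 0 1 0; 1 1 0 1; 0 0 1 0];   % placeholder adjacency matrix
L = diag(sum(A, 2)) - A;                    % graph Laplacian Delta
P = [0.5; -1.0; 0.0; 2.0];                  % placeholder potential on the vertices
H = L + diag(P);                            % discrete Schroedinger operator
lambda = 0.3 + 0.1i;                        % placeholder spectral parameter off the spectrum
w = 2;                                      % source vertex
deltaw = zeros(size(A, 1), 1);  deltaw(w) = 1;   % Kronecker delta at w
G = (H - lambda * eye(size(H))) \ deltaw;   % column G(., w; lambda) of the Green's function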

ADE classification

Certain equations involving the discrete Laplacian only have solutions on the simply-laced Dynkin diagrams (all edges multiplicity 1), and are an example of the ADE classification. Specifically, the only positive solutions to the homogeneous equation:

[math]\displaystyle{ \Delta \phi = \phi, }[/math]

in words,

"Twice any label is the sum of the labels on adjacent vertices,"

are on the extended (affine) ADE Dynkin diagrams, of which there are 2 infinite families (A and D) and 3 exceptions (E). The resulting numbering is unique up to scale, and if the smallest value is set at 1, the other numbers are integers, ranging up to 6.
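
For example, the extended Dynkin diagram [math]\displaystyle{ \tilde{A}_n }[/math] (for [math]\displaystyle{ n \ge 2 }[/math]) is a cycle, and labelling every vertex 1 satisfies the condition: each vertex has exactly two neighbors, whose labels sum to 2, which is twice its own label.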

The ordinary ADE graphs are the only graphs that admit a positive labeling with the following property:

Twice any label minus two is the sum of the labels on adjacent vertices.

In terms of the Laplacian, these are the positive solutions to the inhomogeneous equation:

[math]\displaystyle{ \Delta \phi = \phi - 2. }[/math]

The resulting numbering is unique (scale is specified by the "2"), and consists of integers; for E8 they range from 58 to 270, and have been observed as early as 1968.[15]

References

  1. Leventhal, Daniel (Autumn 2011). "Image processing". https://courses.cs.washington.edu/courses/cse457/11au/lectures/image-processing.pdf. 
  2. Crane, K.; de Goes, F.; Desbrun, M.; Schröder, P. (2013). "Digital geometry processing with discrete exterior calculus". SIGGRAPH '13. 7. pp. 1–126. doi:10.1145/2504435.2504442. http://doi.acm.org/10.1145/2504435.2504442. 
  3. Reuter, M.; Biasotti, S.; Giorgi, D.; Patane, G.; Spagnuolo, M. (2009). "Discrete Laplace-Beltrami operators for shape analysis and segmentation". Computers & Graphics 33 (3): 381–390. doi:10.1016/j.cag.2009.03.005. 
  4. Forsyth, D. A.; Ponce, J. (2003). Computer Vision: A Modern Approach. Upper Saddle River, NJ: Prentice Hall. 
  5. Matthys, Don (Feb 14, 2001). "LoG Filter". https://academic.mu.edu/phys/matthysd/web226/Lab02.htm. 
  6. Provatas, Nikolas; Elder, Ken (2010-10-13). Phase-Field Methods in Materials Science and Engineering. Weinheim, Germany: Wiley-VCH Verlag GmbH & Co. KGaA. p. 219. doi:10.1002/9783527631520. ISBN 978-3-527-63152-0. http://www.physics.mcgill.ca/~provatas/papers/Phase_Field_Methods_text.pdf. 
  7. O'Reilly, H.; Beck, Jeffrey M. (2006). "A Family of Large-Stencil Discrete Laplacian Approximations in Three Dimensions". International Journal for Numerical Methods in Engineering: 1–16. http://psych.colorado.edu/~oreilly/papers/OReillyBeckIP_lapl.pdf. 
  8. 8.0 8.1 Lindeberg, T., "Scale-space for discrete signals", PAMI(12), No. 3, March 1990, pp. 234–254.
  9. 9.0 9.1 9.2 Lindeberg, T., Scale-Space Theory in Computer Vision, Kluwer Academic Publishers, 1994, ISBN 0-7923-9418-6.
  10. Patra, Michael; Karttunen, Mikko (2006). "Stencils with isotropic discretization error for differential operators". Numerical Methods for Partial Differential Equations 22 (4): 936–953. doi:10.1002/num.20129. ISSN 0749-159X. 
  11. Bigun, J. (2006). Vision with Direction. Springer. doi:10.1007/b138918. ISBN 978-3-540-27322-6. 
  12. Newman, Mark (2010). Networks: An Introduction. Oxford University Press. ISBN 978-0199206650. 
  13. Yavari, R.; Cole, K. D.; Rao, P. K. (2020). "Computational heat transfer with spectral graph theory: Quantitative verification". International Journal of Thermal Sciences 153: 106383. doi:10.1016/j.ijthermalsci.2020.106383. 
  14. Cole, K. D.; Riensche, A.; Rao, P. K. (2022). "Discrete Green's functions and spectral graph theory for computationally efficient thermal modeling". International Journal of Heat and Mass Transfer 183: 122112. doi:10.1016/j.ijheatmasstransfer.2021.122112. 
  15. Bourbaki, Nicolas (2002), Groupes et algebres de Lie: Chapters 4–6, Elements of Mathematics, Springer, ISBN 978-3-540-69171-6, https://books.google.com/books?id=FU5WeeFoDY4C&pg=PR7 
