Generalized linear array model

In statistics, the generalized linear array model (GLAM) is used for analyzing data sets with array structures. It is based on the generalized linear model with the design matrix written as a Kronecker product.

Overview

The generalized linear array model or GLAM was introduced in 2006.[1] Such models provide a structure and a computational procedure for fitting generalized linear models or GLMs whose model matrix can be written as a Kronecker product and whose data can be written as an array. In a large GLM, the GLAM approach gives very substantial savings in both storage and computational time over the usual GLM algorithm.

Suppose that the data [math]\displaystyle{ \mathbf Y }[/math] is arranged in a [math]\displaystyle{ d }[/math]-dimensional array with size [math]\displaystyle{ n_1\times n_2\times\dots\times n_d }[/math]; thus, the corresponding data vector [math]\displaystyle{ \mathbf y = \operatorname{vec}(\mathbf Y) }[/math] has size [math]\displaystyle{ n_1n_2n_3\cdots n_d }[/math]. Suppose also that the design matrix is of the form

[math]\displaystyle{ \mathbf X = \mathbf X_d\otimes\mathbf X_{d-1}\otimes\dots\otimes\mathbf X_1. }[/math]
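The setup can be sketched in a few lines of NumPy; the marginal matrices and their sizes below are illustrative placeholders, not data from any particular application:

```python
import numpy as np

# Hypothetical small marginal design matrices (d = 3); sizes are illustrative.
n = [4, 5, 6]      # array dimensions n1, n2, n3
c = [2, 3, 3]      # columns of X1, X2, X3
rng = np.random.default_rng(0)
X1, X2, X3 = (rng.standard_normal((ni, ci)) for ni, ci in zip(n, c))

# Full GLM design matrix: X = X3 (kron) X2 (kron) X1
X = np.kron(X3, np.kron(X2, X1))
print(X.shape)  # (4*5*6, 2*3*3) = (120, 18)

# The marginal matrices store 4*2 + 5*3 + 6*3 = 41 numbers, while the
# full X stores 120*18 = 2160; the gap widens rapidly with d, which is
# the storage saving GLAM exploits by never forming X.
print(X1.size + X2.size + X3.size, X.size)
```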

The standard analysis of a GLM with data vector [math]\displaystyle{ \mathbf y }[/math] and design matrix [math]\displaystyle{ \mathbf X }[/math] proceeds by repeated evaluation of the scoring algorithm

[math]\displaystyle{ \mathbf X'\tilde{\mathbf W}_\delta\mathbf X\hat{\boldsymbol\theta} = \mathbf X'\tilde{\mathbf W}_\delta\tilde{\mathbf z} , }[/math]

where [math]\displaystyle{ \tilde{\boldsymbol\theta} }[/math] is the current approximation to the solution [math]\displaystyle{ \boldsymbol\theta }[/math], [math]\displaystyle{ \hat{\boldsymbol\theta} }[/math] is its improved value, and [math]\displaystyle{ \tilde{\mathbf z} }[/math] is the working variable evaluated at [math]\displaystyle{ \tilde{\boldsymbol\theta} }[/math]; [math]\displaystyle{ \mathbf W_\delta }[/math] is the diagonal weight matrix with elements

[math]\displaystyle{ w_{ii}^{-1} = \left(\frac{\partial\eta_i}{\partial\mu_i}\right)^2\mathrm{var}(y_i), }[/math]

and

[math]\displaystyle{ \mathbf z = \boldsymbol\eta + \mathbf W_\delta^{-1}(\mathbf y - \boldsymbol\mu) }[/math]

is the working variable.
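One step of this scoring iteration can be sketched for a concrete case, here a Poisson GLM with log link on simulated data (the data and starting value are placeholders):

```python
import numpy as np

# One Fisher-scoring step for a Poisson GLM with log link (illustrative data).
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
y = rng.poisson(5.0, size=50).astype(float)

theta = np.zeros(3)          # current approximation (theta tilde)
eta = X @ theta              # linear predictor
mu = np.exp(eta)             # inverse link
# For the log link, d(eta)/d(mu) = 1/mu and var(y) = mu, so w_ii = mu_i.
w = mu
z = eta + (y - mu) / w       # working variable z = eta + W^{-1}(y - mu)

# Solve X' W X theta_hat = X' W z for the improved value theta_hat.
XtW = X.T * w
theta_hat = np.linalg.solve(XtW @ X, XtW @ z)
```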

Computationally, GLAM provides array algorithms to calculate the linear predictor,

[math]\displaystyle{ \boldsymbol\eta = \mathbf X \boldsymbol\theta }[/math]

and the weighted inner product

[math]\displaystyle{ \mathbf X'\tilde{\mathbf W}_\delta\mathbf X }[/math]

without evaluation of the model matrix [math]\displaystyle{ \mathbf X . }[/math]
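The two-dimensional case of the first of these identities, vec(X₁ Θ X₂′) = (X₂ ⊗ X₁) vec(Θ), can be checked numerically; the small random matrices below are placeholders:

```python
import numpy as np

# 2-D sketch: compute the linear predictor without ever forming X = X2 (kron) X1.
rng = np.random.default_rng(2)
n1, n2, c1, c2 = 4, 5, 2, 3
X1 = rng.standard_normal((n1, c1))
X2 = rng.standard_normal((n2, c2))
Theta = rng.standard_normal((c1, c2))    # coefficient matrix

# GLAM: the linear predictor as an n1 x n2 array.
Eta = X1 @ Theta @ X2.T

# Direct GLM computation for comparison (vec stacks columns).
X = np.kron(X2, X1)
eta = X @ Theta.reshape(-1, order="F")

# vec(X1 Theta X2') equals (X2 kron X1) vec(Theta).
print(np.allclose(Eta.reshape(-1, order="F"), eta))  # True
```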

Example

In 2 dimensions, let [math]\displaystyle{ \mathbf X = \mathbf X_2\otimes\mathbf X_1 }[/math]. The linear predictor, arranged as an array, is then [math]\displaystyle{ \mathbf X_1 \boldsymbol\Theta \mathbf X_2' }[/math], where [math]\displaystyle{ \boldsymbol\Theta }[/math] is the matrix of coefficients, and the weighted inner product is obtained from [math]\displaystyle{ G(\mathbf X_1)' \mathbf W G(\mathbf X_2) }[/math], where [math]\displaystyle{ \mathbf W }[/math] is the matrix of weights. Here [math]\displaystyle{ G(\mathbf M) }[/math] is the row tensor function of the [math]\displaystyle{ r \times c }[/math] matrix [math]\displaystyle{ \mathbf M }[/math], given by[1]

[math]\displaystyle{ G(\mathbf M) = (\mathbf M \otimes \mathbf 1') \circ (\mathbf 1' \otimes \mathbf M) }[/math]

where [math]\displaystyle{ \circ }[/math] denotes element-by-element (Hadamard) multiplication and [math]\displaystyle{ \mathbf 1 }[/math] is a vector of ones of length [math]\displaystyle{ c }[/math].
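The row tensor and the computation of the weighted inner product can be sketched as follows; the rearrangement of [math]\displaystyle{ G(\mathbf X_1)' \mathbf W G(\mathbf X_2) }[/math] into [math]\displaystyle{ \mathbf X'\mathbf W_\delta\mathbf X }[/math] is index bookkeeping under the column-stacking vec convention, and the test matrices are placeholders:

```python
import numpy as np

def G(M):
    """Row tensor of an r x c matrix: G(M) = (M kron 1') o (1' kron M), r x c^2."""
    c = M.shape[1]
    one = np.ones((1, c))
    return np.kron(M, one) * np.kron(one, M)

rng = np.random.default_rng(3)
n1, n2, c1, c2 = 4, 5, 2, 3
X1 = rng.standard_normal((n1, c1))
X2 = rng.standard_normal((n2, c2))
W = rng.random((n1, n2))                  # weights arranged as an n1 x n2 array

# GLAM: inner product from the small matrices, then rearranged.
B = G(X1).T @ W @ G(X2)                   # c1^2 x c2^2
inner = (B.reshape(c1, c1, c2, c2)
          .transpose(2, 0, 3, 1)
          .reshape(c1 * c2, c1 * c2))

# Direct GLM computation for comparison, with W_delta = diag(vec(W)).
X = np.kron(X2, X1)
inner_direct = X.T @ np.diag(W.reshape(-1, order="F")) @ X
print(np.allclose(inner, inner_direct))   # True
```

Note that the large diagonal matrix [math]\displaystyle{ \mathbf W_\delta }[/math] is built here only for the comparison; the GLAM path touches nothing bigger than [math]\displaystyle{ c_1^2 \times c_2^2 }[/math].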

On the other hand, the row tensor function [math]\displaystyle{ G(\mathbf M) }[/math] of the [math]\displaystyle{ r \times c }[/math] matrix [math]\displaystyle{ \mathbf M }[/math] is an example of the face-splitting product of matrices, proposed by Vadym Slyusar in 1996:[2][3][4][5]

[math]\displaystyle{ \mathbf{M} \bull \mathbf{M} = \left(\mathbf {M} \otimes \mathbf {1}^\textsf{T}\right) \circ \left(\mathbf {1}^\textsf{T} \otimes \mathbf {M}\right) , }[/math]

where [math]\displaystyle{ \bull }[/math] denotes the face-splitting product.
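The identity [math]\displaystyle{ G(\mathbf M) = \mathbf M \bull \mathbf M }[/math] is easy to verify numerically; the helper below is an illustrative implementation of the face-splitting product (row-wise Kronecker products), not from any particular library:

```python
import numpy as np

def face_split(A, B):
    """Face-splitting product: row i of the result is A[i] kron B[i]."""
    return np.einsum('ij,ik->ijk', A, B).reshape(A.shape[0], -1)

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 3))
one = np.ones((1, 3))

# G(M) coincides with the face-splitting product of M with itself.
GM = np.kron(M, one) * np.kron(one, M)
print(np.allclose(face_split(M, M), GM))  # True
```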

These low-storage, high-speed formulae extend to [math]\displaystyle{ d }[/math] dimensions.

Applications

GLAM is designed to be used in [math]\displaystyle{ d }[/math]-dimensional smoothing problems where the data are arranged in an array and the smoothing matrix is constructed as a Kronecker product of [math]\displaystyle{ d }[/math] one-dimensional smoothing matrices.

References

  1. Currie, I. D.; Durban, M.; Eilers, P. H. C. (2006). "Generalized linear array models with applications to multidimensional smoothing". Journal of the Royal Statistical Society, Series B 68 (2): 259–280. doi:10.1111/j.1467-9868.2006.00543.x. 
  2. Slyusar, V. I. (1998). "End products in matrices in radar applications". Radioelectronics and Communications Systems 41 (3): 50–53. http://slyusar.kiev.ua/en/IZV_1998_3.pdf. 
  3. Slyusar, V. I. (1997-05-20). "Analytical model of the digital antenna array on a basis of face-splitting matrix products.". Proc. ICATT-97, Kyiv: 108–109. http://slyusar.kiev.ua/ICATT97.pdf. 
  4. Slyusar, V. I. (1997-09-15). "New operations of matrices product for applications of radars". Proc. Direct and Inverse Problems of Electromagnetic and Acoustic Wave Theory (DIPED-97), Lviv.: 73–74. http://slyusar.kiev.ua/DIPED_1997.pdf. 
  5. Slyusar, V. I. (1999). "A Family of Face Products of Matrices and its Properties". Cybernetics and Systems Analysis 35 (3): 379–384. doi:10.1007/BF02733426. http://slyusar.kiev.ua/FACE.pdf.