Quadratic eigenvalue problem

In mathematics, the quadratic eigenvalue problem[1] (QEP) is to find scalar eigenvalues [math]\displaystyle{ \lambda }[/math], left eigenvectors [math]\displaystyle{ y }[/math], and right eigenvectors [math]\displaystyle{ x }[/math] such that

[math]\displaystyle{ Q(\lambda)x = 0 ~ \text{ and } ~ y^\ast Q(\lambda) = 0, }[/math]

where [math]\displaystyle{ Q(\lambda)=\lambda^2 M + \lambda C + K }[/math], with matrix coefficients [math]\displaystyle{ M, \, C, K \in \mathbb{C}^{n \times n} }[/math], and we require that [math]\displaystyle{ M \neq 0 }[/math] (so that the leading coefficient is nonzero). There are [math]\displaystyle{ 2n }[/math] eigenvalues, which may be finite or infinite, and possibly zero. This is a special case of a nonlinear eigenproblem. [math]\displaystyle{ Q(\lambda) }[/math] is also known as a quadratic polynomial matrix.

Spectral theory

A QEP is said to be regular if [math]\displaystyle{ \text{det} (Q(\lambda)) \not \equiv 0 }[/math] identically. The coefficient of the [math]\displaystyle{ \lambda^{2n} }[/math] term in [math]\displaystyle{ \text{det}(Q(\lambda)) }[/math] is [math]\displaystyle{ \text{det}(M) }[/math], implying that the QEP is regular if [math]\displaystyle{ M }[/math] is nonsingular.
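
This leading-coefficient statement can be checked numerically. A minimal sketch (assuming NumPy is available; the matrices below are arbitrary random examples) samples [math]\displaystyle{ \text{det}(Q(\lambda)) }[/math] at [math]\displaystyle{ 2n+1 }[/math] points, fits a polynomial of degree [math]\displaystyle{ 2n }[/math], and compares its leading coefficient with [math]\displaystyle{ \text{det}(M) }[/math]:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 3
    M, C, K = (rng.standard_normal((n, n)) for _ in range(3))

    def detQ(lam):
        # determinant of Q(lambda) = lambda^2 M + lambda C + K
        return np.linalg.det(lam**2 * M + lam * C + K)

    # det(Q(lambda)) is a polynomial of degree at most 2n in lambda;
    # fit it from 2n + 1 samples and inspect the lambda^(2n) coefficient.
    pts = np.linspace(-3, 3, 2 * n + 1)
    coeffs = np.polyfit(pts, [detQ(t) for t in pts], 2 * n)

    print(coeffs[0])          # coefficient of lambda^(2n)
    print(np.linalg.det(M))   # should agree up to rounding error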

Eigenvalues at infinity and eigenvalues at 0 may be exchanged by considering the reversed polynomial, [math]\displaystyle{ \lambda^2 Q(\lambda^{-1}) = \lambda^2 K + \lambda C + M }[/math]. As there are [math]\displaystyle{ 2n }[/math] eigenvectors in an [math]\displaystyle{ n }[/math]-dimensional space, the eigenvectors cannot be mutually orthogonal. It is possible for the same eigenvector to be attached to different eigenvalues.
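
For the scalar case [math]\displaystyle{ n = 1 }[/math] the effect of reversal is easy to see directly: the roots of the reversed polynomial are the reciprocals of the nonzero roots of the original. A small sketch, assuming NumPy and arbitrarily chosen scalar coefficients:

    import numpy as np

    m, c, k = 2.0, 3.0, 5.0             # Q(lam) = m*lam**2 + c*lam + k with n = 1

    roots_Q = np.roots([m, c, k])       # eigenvalues of Q
    roots_rev = np.roots([k, c, m])     # eigenvalues of the reversed polynomial

    # each nonzero eigenvalue of Q corresponds to its reciprocal in the reversal
    print(np.sort_complex(1.0 / roots_Q))
    print(np.sort_complex(roots_rev))   # same values, up to rounding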

Applications

Systems of differential equations

Quadratic eigenvalue problems arise naturally in the solution of systems of second-order linear differential equations without forcing:

[math]\displaystyle{ M q''(t) +C q'(t) + K q(t) = 0 }[/math]

where [math]\displaystyle{ q(t) \in \mathbb{R}^n }[/math] and [math]\displaystyle{ M, C, K \in \mathbb{R}^{n\times n} }[/math]. If all quadratic eigenvalues of [math]\displaystyle{ Q(\lambda) = \lambda^2 M + \lambda C + K }[/math] are distinct, then the solution can be written in terms of the quadratic eigenvalues and right quadratic eigenvectors as

[math]\displaystyle{ q(t) = \sum_{j=1}^{2n} \alpha_j x_j e^{\lambda_j t} = X e^{\Lambda t} \alpha }[/math]

where [math]\displaystyle{ \Lambda = \text{Diag}([\lambda_1, \ldots, \lambda_{2n}]) \in \mathbb{C}^{2n \times 2n} }[/math] is the diagonal matrix of quadratic eigenvalues, [math]\displaystyle{ X = [x_1, \ldots, x_{2n}] \in \mathbb{C}^{n \times 2n} }[/math] collects the [math]\displaystyle{ 2n }[/math] right quadratic eigenvectors, and [math]\displaystyle{ \alpha = [\alpha_1, \cdots, \alpha_{2n}]^\top \in \mathbb{C}^{2n} }[/math] is a parameter vector determined from the initial conditions on [math]\displaystyle{ q }[/math] and [math]\displaystyle{ q' }[/math] (the eigenvalues and eigenvectors may be complex even when [math]\displaystyle{ M, C, K }[/math] are real). Stability theory for linear systems can now be applied, as the behavior of a solution depends explicitly on the (quadratic) eigenvalues.
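
As a concrete illustration, the sketch below (a hedged example assuming NumPy and SciPy; the matrices and initial conditions are arbitrary, [math]\displaystyle{ M }[/math] is taken nonsingular so that all eigenvalues are finite and, generically, distinct, and the quadratic eigenpairs are obtained via the first companion linearization described under Methods of solution) computes the eigenpairs, solves for [math]\displaystyle{ \alpha }[/math] from [math]\displaystyle{ q(0) }[/math] and [math]\displaystyle{ q'(0) }[/math], and evaluates [math]\displaystyle{ q(t) }[/math]:

    import numpy as np
    from scipy.linalg import eig, solve

    rng = np.random.default_rng(1)
    n = 3
    M = np.eye(n)                        # nonsingular mass matrix: all eigenvalues finite
    C = rng.standard_normal((n, n))
    K = rng.standard_normal((n, n))

    # first companion linearization: A z = lambda B z with z = [x; lambda*x]
    A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
    B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
    lam, Z = eig(A, B)
    X = Z[:n, :]                         # right quadratic eigenvectors (top block of z)

    # alpha solves [X; X*Lambda] alpha = [q(0); q'(0)]
    q0 = rng.standard_normal(n)
    qdot0 = rng.standard_normal(n)
    alpha = solve(np.vstack([X, X * lam]), np.concatenate([q0, qdot0]))

    def q(t):
        # q(t) = X diag(exp(lambda_j * t)) alpha
        return (X * np.exp(lam * t)) @ alpha

    print(np.allclose(q(0.0), q0))       # True: the initial condition is recovered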

Finite element methods

A QEP can arise as part of the dynamic analysis of structures discretized by the finite element method. In this case the quadratic [math]\displaystyle{ Q(\lambda) }[/math] has the form [math]\displaystyle{ Q(\lambda)=\lambda^2 M + \lambda C + K }[/math], where [math]\displaystyle{ M }[/math] is the mass matrix, [math]\displaystyle{ C }[/math] is the damping matrix and [math]\displaystyle{ K }[/math] is the stiffness matrix. Other applications include vibro-acoustics and fluid dynamics.

Methods of solution

Direct methods for solving the standard or generalized eigenvalue problems [math]\displaystyle{ Ax = \lambda x }[/math] and [math]\displaystyle{ Ax = \lambda B x }[/math] are based on transforming the problem to Schur or generalized Schur form. However, there is no analogous form for quadratic matrix polynomials. One approach is to transform the quadratic matrix polynomial to a linear matrix pencil ([math]\displaystyle{ A-\lambda B }[/math]) and solve a generalized eigenvalue problem. Once the eigenvalues and eigenvectors of the linear problem have been computed, the eigenvectors and eigenvalues of the quadratic can be recovered.

The most common linearization is the first companion linearization

[math]\displaystyle{ L1(\lambda) = \begin{bmatrix} 0 & N \\ -K & -C \end{bmatrix} - \lambda\begin{bmatrix} N & 0 \\ 0 & M \end{bmatrix}, }[/math]

with corresponding eigenvector

[math]\displaystyle{ z = \begin{bmatrix} x \\ \lambda x \end{bmatrix}. }[/math]

For convenience, one often takes [math]\displaystyle{ N }[/math] to be the [math]\displaystyle{ n\times n }[/math] identity matrix. We solve [math]\displaystyle{ L1(\lambda) z = 0 }[/math] for [math]\displaystyle{ \lambda }[/math] and [math]\displaystyle{ z }[/math], for example by computing the generalized Schur form. We can then take the first [math]\displaystyle{ n }[/math] components of [math]\displaystyle{ z }[/math] as the eigenvector [math]\displaystyle{ x }[/math] of the original quadratic [math]\displaystyle{ Q(\lambda) }[/math].
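
As a hedged sketch of this procedure (assuming NumPy and SciPy, taking [math]\displaystyle{ N }[/math] to be the identity, and using arbitrary random coefficient matrices), a generalized eigensolver is applied to the pencil and the recovered eigenpairs are checked against [math]\displaystyle{ Q(\lambda) }[/math]:

    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(2)
    n = 4
    M = rng.standard_normal((n, n))
    C = rng.standard_normal((n, n))
    K = rng.standard_normal((n, n))

    # L1(lambda) = [[0, I], [-K, -C]] - lambda * [[I, 0], [0, M]]  with N = I
    I = np.eye(n)
    A = np.block([[np.zeros((n, n)), I], [-K, -C]])
    B = np.block([[I, np.zeros((n, n))], [np.zeros((n, n)), M]])

    lam, Z = eig(A, B)                   # 2n generalized eigenpairs of the pencil

    # the first n components of each z give a right eigenvector of Q(lambda)
    for j in range(2 * n):
        x = Z[:n, j]
        x = x / np.linalg.norm(x)        # normalize the recovered eigenvector
        residual = (lam[j] ** 2 * M + lam[j] * C + K) @ x
        print(j, np.linalg.norm(residual))   # small residuals confirm each eigenpair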

Another common linearization is given by

[math]\displaystyle{ L2(\lambda)= \begin{bmatrix} -K & 0 \\ 0 & N \end{bmatrix} - \lambda\begin{bmatrix} C & M \\ N & 0 \end{bmatrix}. }[/math]
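
Both companion forms are linearizations of the same quadratic, so (again with [math]\displaystyle{ N }[/math] taken as the identity) they share the same [math]\displaystyle{ 2n }[/math] eigenvalues. A short sanity check, assuming NumPy and SciPy and arbitrary random matrices:

    import numpy as np
    from scipy.linalg import eigvals

    rng = np.random.default_rng(3)
    n = 3
    M, C, K = (rng.standard_normal((n, n)) for _ in range(3))
    I, Z0 = np.eye(n), np.zeros((n, n))

    # first and second companion pencils A - lambda*B, with N = I
    A1, B1 = np.block([[Z0, I], [-K, -C]]), np.block([[I, Z0], [Z0, M]])
    A2, B2 = np.block([[-K, Z0], [Z0, I]]), np.block([[C, M], [I, Z0]])

    e1 = np.sort_complex(eigvals(A1, B1))
    e2 = np.sort_complex(eigvals(A2, B2))
    print(np.allclose(e1, e2))           # True: both pencils share the spectrum of Q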

In the case when either [math]\displaystyle{ A }[/math] or [math]\displaystyle{ B }[/math] (the matrices of the pencil [math]\displaystyle{ A-\lambda B }[/math]) is a Hamiltonian matrix and the other is a skew-Hamiltonian matrix, the following linearizations can be used:

[math]\displaystyle{ L3(\lambda)= \begin{bmatrix} K & 0 \\ C & K \end{bmatrix} - \lambda\begin{bmatrix} 0 & K \\ -M & 0 \end{bmatrix}. }[/math]
[math]\displaystyle{ L4(\lambda)= \begin{bmatrix} 0 & -K \\ M & 0 \end{bmatrix} - \lambda\begin{bmatrix} M & C \\ 0 & M \end{bmatrix}. }[/math]

References

  1. F. Tisseur and K. Meerbergen, The quadratic eigenvalue problem, SIAM Rev., 43 (2001), pp. 235–286.