MUSIC (algorithm)
MUSIC (MUltiple SIgnal Classification) is an algorithm used for frequency estimation[1][2][3] and radio direction finding.[4]
History
In many practical signal processing problems, the objective is to estimate from measurements a set of constant parameters upon which the received signals depend. There have been several approaches to such problems including the so-called maximum likelihood (ML) method of Capon (1969) and Burg's maximum entropy (ME) method. Although often successful and widely used, these methods have certain fundamental limitations (especially bias and sensitivity in parameter estimates), largely because they use an incorrect model (e.g., AR rather than special ARMA) of the measurements.
Pisarenko (1973) was one of the first to exploit the structure of the data model, doing so in the context of estimating the parameters of complex sinusoids in additive noise using a covariance approach. Schmidt (1977), while working at Northrop Grumman, and independently Bienvenu and Kopp (1979), were the first to correctly exploit the measurement model in the case of sensor arrays of arbitrary form. Schmidt, in particular, accomplished this by first deriving a complete geometric solution in the absence of noise, then cleverly extending the geometric concepts to obtain a reasonable approximate solution in the presence of noise. The resulting algorithm was called MUSIC (MUltiple SIgnal Classification) and has been widely studied.
In a detailed evaluation based on thousands of simulations, the Massachusetts Institute of Technology's Lincoln Laboratory concluded in 1998 that, among currently accepted high-resolution algorithms, MUSIC was the most promising and a leading candidate for further study and actual hardware implementation.[5] However, although the performance advantages of MUSIC are substantial, they are achieved at a cost in computation (searching over parameter space) and storage (of array calibration data).[6]
Theory
The MUSIC method assumes that a signal vector, [math]\displaystyle{ \mathbf{x} }[/math], consists of [math]\displaystyle{ p }[/math] complex exponentials, whose frequencies [math]\displaystyle{ \omega }[/math] are unknown, in the presence of Gaussian white noise, [math]\displaystyle{ \mathbf{n} }[/math], as given by the linear model
- [math]\displaystyle{ \mathbf{x} = \mathbf{A} \mathbf{s} + \mathbf{n}. }[/math]
Here [math]\displaystyle{ \mathbf{A} = [\mathbf{a}(\omega_1), \cdots, \mathbf{a}(\omega_p)] }[/math] is an [math]\displaystyle{ M \times p }[/math] Vandermonde matrix of steering vectors [math]\displaystyle{ \mathbf{a}(\omega) = [1, e^{j\omega}, e^{j2\omega}, \ldots, e^{j(M-1)\omega}]^T }[/math] and [math]\displaystyle{ \mathbf{s} = [s_1, \ldots, s_p]^T }[/math] is the amplitude vector. A crucial assumption is that the number of sources, [math]\displaystyle{ p }[/math], is less than the number of elements in the measurement vector, [math]\displaystyle{ M }[/math], i.e. [math]\displaystyle{ p \lt M }[/math].
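As an illustration, the linear model above can be simulated directly. The particular values below ([math]\displaystyle{ M = 8 }[/math], [math]\displaystyle{ p = 2 }[/math], the frequencies and the amplitudes) are assumptions for the example, not part of the method:

```python
import numpy as np

def steering_vector(omega, M):
    """a(omega) = [1, e^{j omega}, ..., e^{j(M-1) omega}]^T."""
    return np.exp(1j * omega * np.arange(M))

# Assumed example values: M = 8 sensors, p = 2 complex exponentials.
M, p = 8, 2
omegas = [0.5, 1.3]                      # true frequencies (unknown in practice)
A = np.column_stack([steering_vector(w, M) for w in omegas])  # M x p Vandermonde matrix
s = np.array([1.0 + 0.5j, 0.8 - 0.2j])  # complex amplitude vector
rng = np.random.default_rng(0)
n = 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
x = A @ s + n                            # one snapshot of the model x = A s + n
```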
The [math]\displaystyle{ M \times M }[/math] autocorrelation matrix of [math]\displaystyle{ \mathbf{x} }[/math] is then given by
- [math]\displaystyle{ \mathbf{R}_x = \mathbf{A} \mathbf{R}_s \mathbf{A}^H + \sigma^2 \mathbf{I}, }[/math]
where [math]\displaystyle{ \sigma^2 }[/math] is the noise variance, [math]\displaystyle{ \mathbf{I} }[/math] is the [math]\displaystyle{ M \times M }[/math] identity matrix, and [math]\displaystyle{ \mathbf{R}_s }[/math] is the [math]\displaystyle{ p \times p }[/math] autocorrelation matrix of [math]\displaystyle{ \mathbf{s} }[/math].
The autocorrelation matrix [math]\displaystyle{ \mathbf{R}_x }[/math] is traditionally estimated using the sample correlation matrix
- [math]\displaystyle{ \widehat{\mathbf{R}}_x = \frac{1}{N} \mathbf{X} \mathbf{X}^H }[/math]
where [math]\displaystyle{ N \gt M }[/math] is the number of vector observations and [math]\displaystyle{ \mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N] }[/math]. Given the estimate of [math]\displaystyle{ \mathbf{R}_x }[/math], MUSIC estimates the frequency content of the signal or autocorrelation matrix using an eigenspace method.
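A minimal sketch of this estimate follows; the snapshot matrix [math]\displaystyle{ \mathbf{X} }[/math] here is random placeholder data, used only to show the shapes involved:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 200        # M sensors, N > M vector observations
# Placeholder complex snapshots stacked as columns of an M x N matrix X.
X = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
R_hat = (X @ X.conj().T) / N   # sample correlation matrix (1/N) X X^H
```

The estimate is Hermitian by construction, which is what the eigenspace step below relies on.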
Since [math]\displaystyle{ \mathbf{R}_x }[/math] is a Hermitian matrix, all of its [math]\displaystyle{ M }[/math] eigenvectors [math]\displaystyle{ \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_M\} }[/math] are orthogonal to each other. If the eigenvalues of [math]\displaystyle{ \mathbf{R}_x }[/math] are sorted in decreasing order, the eigenvectors [math]\displaystyle{ \{\mathbf{v}_1, \ldots, \mathbf{v}_p\} }[/math] corresponding to the [math]\displaystyle{ p }[/math] largest eigenvalues (i.e. directions of largest variability) span the signal subspace [math]\displaystyle{ \mathcal{U}_S }[/math]. The remaining [math]\displaystyle{ M-p }[/math] eigenvectors correspond to eigenvalues equal to [math]\displaystyle{ \sigma^2 }[/math] and span the noise subspace [math]\displaystyle{ \mathcal{U}_N }[/math], which is orthogonal to the signal subspace, [math]\displaystyle{ \mathcal{U}_S \perp \mathcal{U}_N }[/math].
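This subspace split can be sketched numerically. The sizes and the single source frequency below are assumed example values:

```python
import numpy as np

rng = np.random.default_rng(1)
M, p, N = 6, 1, 500                       # assumed example sizes
a = np.exp(1j * 0.9 * np.arange(M))       # steering vector of the single source
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a, s) + noise
R = X @ X.conj().T / N                    # sample covariance estimate

# For a Hermitian matrix, eigh returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(R)
U_noise = eigvecs[:, : M - p]             # M - p smallest eigenvalues -> noise subspace
U_signal = eigvecs[:, M - p :]            # p largest eigenvalues -> signal subspace
```

With enough snapshots, the estimated noise subspace is very nearly orthogonal to the true steering vector `a`.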
Note that for [math]\displaystyle{ M = p + 1 }[/math], MUSIC is identical to Pisarenko harmonic decomposition. The general idea behind the MUSIC method is to use all the eigenvectors that span the noise subspace to improve the performance of the Pisarenko estimator.
Since any signal vector [math]\displaystyle{ \mathbf{e} }[/math] that resides in the signal subspace, [math]\displaystyle{ \mathbf{e} \in \mathcal{U}_S }[/math], must be orthogonal to the noise subspace, [math]\displaystyle{ \mathbf{e} \perp \mathcal{U}_N }[/math], it must be that [math]\displaystyle{ \mathbf{e} \perp \mathbf{v}_i }[/math] for all the eigenvectors [math]\displaystyle{ \{\mathbf{v}_i \}_{i=p+1}^M }[/math] that span the noise subspace. In order to measure the degree of orthogonality of [math]\displaystyle{ \mathbf{e} }[/math] with respect to all the [math]\displaystyle{ \mathbf{v}_i \in \mathcal{U}_N }[/math], the MUSIC algorithm defines a squared norm
- [math]\displaystyle{ d^2 = \| \mathbf{U}_N^H \mathbf{e} \|^2 = \mathbf{e}^H \mathbf{U}_N \mathbf{U}_N^H \mathbf{e} = \sum_{i=p+1}^{M} |\mathbf{e}^{H} \mathbf{v}_i|^2 }[/math]
where the matrix [math]\displaystyle{ \mathbf{U}_N = [\mathbf{v}_{p+1}, \ldots, \mathbf{v}_{M}] }[/math] is the matrix of eigenvectors that span the noise subspace [math]\displaystyle{ \mathcal{U}_N }[/math]. If [math]\displaystyle{ \mathbf{e} \in \mathcal{U}_S }[/math], then [math]\displaystyle{ d^2 = 0 }[/math], as implied by the orthogonality condition. Taking the reciprocal of the squared norm expression creates sharp peaks at the signal frequencies. The frequency estimation function for MUSIC (or the pseudo-spectrum) is
- [math]\displaystyle{ \hat P_{MU}(e^{j \omega}) = \frac{1}{\mathbf{e}^H \mathbf{U}_N \mathbf{U}_N^H \mathbf{e}} = \frac{1}{\sum_{i=p+1}^{M} |\mathbf{e}^{H} \mathbf{v}_i|^2}, }[/math]
where [math]\displaystyle{ \mathbf{v}_i }[/math] are the noise eigenvectors and
- [math]\displaystyle{ \mathbf{e} = \begin{bmatrix}1 & e^{j \omega} & e^{j 2 \omega} & \cdots & e^{j (M-1) \omega}\end{bmatrix}^T }[/math]
is the candidate steering vector. The locations of the [math]\displaystyle{ p }[/math] largest peaks of the estimation function give the frequency estimates for the [math]\displaystyle{ p }[/math] signal components
- [math]\displaystyle{ \hat{\omega} = \arg\max_\omega \; \hat P_{MU}(e^{j \omega}). }[/math]
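Putting the pieces together, a complete MUSIC frequency estimate can be sketched as follows. The array size, snapshot count, noise level and the two "true" frequencies are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
M, p, N = 8, 2, 400
true_omegas = np.array([0.6, 1.8])                    # assumed example frequencies
A = np.exp(1j * np.outer(np.arange(M), true_omegas))  # M x p steering matrix
S = rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                    # sample correlation matrix
_, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
Un = V[:, : M - p]                        # noise-subspace eigenvectors

# Evaluate the pseudo-spectrum 1 / ||Un^H e(omega)||^2 on a dense grid.
omega_grid = np.linspace(0, np.pi, 2048)
E = np.exp(1j * np.outer(np.arange(M), omega_grid))   # candidate steering vectors
P = 1.0 / np.sum(np.abs(Un.conj().T @ E) ** 2, axis=0)

# Frequency estimates: locations of the p largest local peaks.
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
peaks = sorted(peaks, key=lambda i: P[i], reverse=True)[:p]
estimates = np.sort(omega_grid[peaks])
```

Because the grid can be made arbitrarily dense (or refined by a local search), the estimates are not tied to DFT bin positions.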
MUSIC is a generalization of Pisarenko's method, and it reduces to Pisarenko's method when [math]\displaystyle{ M=p+1 }[/math]. In Pisarenko's method, only a single eigenvector is used to form the denominator of the frequency estimation function; the eigenvector is interpreted as a set of autoregressive coefficients, whose zeros can be found analytically or with polynomial root-finding algorithms. In contrast, MUSIC sums the contributions of several such eigenvectors in the denominator, so exact zeros may no longer exist. Instead, the denominator has local minima, which appear as peaks of the estimation function and can be located by a numerical search.
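The Pisarenko special case [math]\displaystyle{ M = p + 1 }[/math] can be sketched for a single complex sinusoid: with one noise eigenvector [math]\displaystyle{ \mathbf{v} }[/math], the condition [math]\displaystyle{ \mathbf{a}(\omega)^H \mathbf{v} = 0 }[/math] is a polynomial equation solvable in closed form. The frequency and noise level below are assumed example values:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 1
M = p + 1                                  # Pisarenko case: M = p + 1
omega = 1.1                                # assumed true frequency
N = 1000
t = np.arange(N + M)
sig = np.exp(1j * omega * t) + 0.05 * (rng.standard_normal(N + M)
                                       + 1j * rng.standard_normal(N + M))

# Build length-M snapshots from the 1-D signal and estimate the covariance.
X = np.column_stack([sig[k : k + M] for k in range(N)])
R = X @ X.conj().T / N

_, V = np.linalg.eigh(R)
v = V[:, 0]                                # the single noise eigenvector (M - p = 1)
# a(omega)^H v = v[0] + e^{-j omega} v[1] = 0  =>  e^{j omega} = -v[1] / v[0]
omega_hat = np.angle(-v[1] / v[0])
```

Here the root is found analytically; MUSIC replaces this closed-form rooting with a search over the pseudo-spectrum.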
Dimension of signal space
The fundamental observation on which MUSIC and other subspace decomposition methods are based concerns the rank of the autocorrelation matrix [math]\displaystyle{ \mathbf{R}_x }[/math], which is related to the number of signal sources [math]\displaystyle{ p }[/math] as follows.
If the sources are complex, then [math]\displaystyle{ M \gt p }[/math] and the dimension of the signal subspace [math]\displaystyle{ \mathcal{U}_S }[/math] is [math]\displaystyle{ p }[/math]. If the sources are real, then [math]\displaystyle{ M \gt 2p }[/math] and the dimension of the signal subspace is [math]\displaystyle{ 2p }[/math], i.e. each real sinusoid is generated by two basis vectors.
This fundamental result, although often skipped in spectral analysis books, is the reason why the input signal can be decomposed into [math]\displaystyle{ p }[/math] signal subspace eigenvectors spanning [math]\displaystyle{ \mathcal{U}_S }[/math] ([math]\displaystyle{ 2p }[/math] for real-valued signals) and [math]\displaystyle{ M - p }[/math] noise subspace eigenvectors spanning [math]\displaystyle{ \mathcal{U}_N }[/math]. It is based on signal embedding theory[2][7] and can also be explained by the topological theory of manifolds.[4]
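The real-valued case can be checked numerically: the covariance of a single noiseless real sinusoid has rank two, since [math]\displaystyle{ \cos(\omega n) = (e^{j\omega n} + e^{-j\omega n})/2 }[/math] spans two complex exponentials. The frequency and sizes below are assumed example values:

```python
import numpy as np

M, N = 6, 2000
t = np.arange(N + M)
x = np.cos(0.7 * t)                        # one real sinusoid, no noise

# Length-M sliding windows stacked as columns, then the sample covariance.
X = np.column_stack([x[k : k + M] for k in range(N)])
R = X @ X.T / N

# Numerical rank: 2 nonzero eigenvalues for a single real sinusoid (2p with p = 1).
rank = np.linalg.matrix_rank(R, tol=1e-6)
```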
Comparison to other methods
MUSIC outperforms simple methods such as picking peaks of DFT spectra in the presence of noise, when the number of components is known in advance, because it exploits knowledge of this number to separate the signal from the noise in its final estimate.
Unlike the DFT, it is able to estimate frequencies with accuracy finer than one sample, because its estimation function can be evaluated for any frequency, not just those of the DFT bins. This is a form of superresolution.
Its chief disadvantage is that it requires the number of components to be known in advance, so the original method cannot be used in more general cases. Methods exist for estimating the number of source components purely from statistical properties of the autocorrelation matrix; see, e.g., [8]. In addition, MUSIC assumes coexistent sources to be uncorrelated, which limits its practical applications.
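One classical family of such estimators is the information-theoretic criteria of Wax and Kailath; the sketch below implements their MDL criterion on the eigenvalues of the sample covariance. This is an illustrative implementation, and the simulation parameters are assumptions for the example:

```python
import numpy as np

def mdl_num_sources(eigvals, N):
    """Wax-Kailath MDL estimate of the number of sources.

    eigvals: covariance eigenvalues sorted in decreasing order.
    N: number of snapshots used to form the sample covariance.
    """
    M = len(eigvals)
    scores = []
    for k in range(M):
        tail = eigvals[k:]                      # the M - k smallest eigenvalues
        gm = np.exp(np.mean(np.log(tail)))      # geometric mean
        am = np.mean(tail)                      # arithmetic mean
        data_term = -N * (M - k) * np.log(gm / am)
        penalty = 0.5 * k * (2 * M - k) * np.log(N)
        scores.append(data_term + penalty)
    return int(np.argmin(scores))

# Simulated check: p = 2 sources, M = 8 sensors, N = 1000 snapshots.
rng = np.random.default_rng(5)
M, p, N = 8, 2, 1000
A = np.exp(1j * np.outer(np.arange(M), [0.6, 1.8]))
S = rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
lam = np.linalg.eigvalsh(X @ X.conj().T / N)[::-1]   # decreasing order
p_hat = mdl_num_sources(lam, N)
```

The data term rewards models whose residual eigenvalues are nearly equal (as pure noise eigenvalues are), while the penalty term discourages overestimating the number of sources.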
Recent iterative semi-parametric methods offer robust superresolution despite highly correlated sources, e.g., SAMV.[9][10]
Other applications
A modified version of MUSIC, denoted Time-Reversal MUSIC (TR-MUSIC), has recently been applied to computational time-reversal imaging.[11][12] The MUSIC algorithm has also been implemented for fast detection of DTMF (dual-tone multi-frequency signaling) frequencies in the form of the C library libmusic[13] (with an accompanying MATLAB implementation).[14]
See also
- Spectral density estimation
- Periodogram
- Matched filter
- Welch's method
- Bartlett's method
- SAMV (algorithm)
- Radio direction finding
- Pitch detection algorithm
- High-resolution microscopy
References
- ↑ Hayes, Monson H., Statistical Digital Signal Processing and Modeling, John Wiley & Sons, Inc., 1996. ISBN:0-471-59431-8.
- ↑ 2.0 2.1 Gregor, Piotr (2022). Zastosowanie algorytmu MUSIC do wykrywania DTMF [Application of MUSIC algorithm to DTMF detection] (Thesis) (in Polish). Warsaw University of Technology.
- ↑ Costanzo, Sandra; Buonanno, Giovanni; Solimene, Raffaele (2022). "Super-Resolution Spectral Approach for the Accuracy Enhancement of Biomedical Resonant Microwave Sensors". IEEE Journal of Electromagnetics, RF and Microwaves in Medicine and Biology 6 (4): 539–545. doi:10.1109/JERM.2022.3210457. ISSN 2469-7249. https://ieeexplore.ieee.org/document/9913069.
- ↑ 4.0 4.1 Schmidt, R.O, "Multiple Emitter Location and Signal Parameter Estimation," IEEE Trans. Antennas Propagation, Vol. AP-34 (March 1986), pp. 276–280.
- ↑ Barabell, A. J. (1998). "Performance Comparison of Superresolution Array Processing Algorithms. Revised.". Massachusetts Inst of Tech Lexington Lincoln Lab. https://apps.dtic.mil/sti/pdfs/ADA347296.pdf.
- ↑ R. Roy and T. Kailath, "ESPRIT-estimation of signal parameters via rotational invariance techniques," in IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 7, pp. 984–995, Jul 1989.
- ↑ Penny, W. D. (2009), Signal Processing Course, University College London, Lecture notes 1999–2000 academic year, https://www.fil.ion.ucl.ac.uk/~wpenny/course/course.html
- ↑ Fishler, Eran, and H. Vincent Poor. "Estimation of the number of sources in unbalanced arrays via information theoretic criteria." IEEE Transactions on Signal Processing 53.9 (2005): 3543–3553.
- ↑ Abeida, Habti; Zhang, Qilin; Li, Jian; Merabtine, Nadjim (2013). "Iterative Sparse Asymptotic Minimum Variance Based Approaches for Array Processing". IEEE Transactions on Signal Processing (Institute of Electrical and Electronics Engineers (IEEE)) 61 (4): 933–944. doi:10.1109/tsp.2012.2231676. ISSN 1053-587X. Bibcode: 2013ITSP...61..933A.
- ↑ Zhang, Qilin; Abeida, Habti; Xue, Ming; Rowe, William; Li, Jian (2012). "Fast implementation of sparse iterative covariance-based estimation for source localization". The Journal of the Acoustical Society of America 131 (2): 1249–1259. doi:10.1121/1.3672656. PMID 22352499. Bibcode: 2012ASAJ..131.1249Z.
- ↑ Devaney, A.J. (2005-05-01). "Time reversal imaging of obscured targets from multistatic data". IEEE Transactions on Antennas and Propagation 53 (5): 1600–1610. doi:10.1109/TAP.2005.846723. ISSN 0018-926X. Bibcode: 2005ITAP...53.1600D.
- ↑ Ciuonzo, D.; Romano, G.; Solimene, R. (2015-05-01). "Performance Analysis of Time-Reversal MUSIC". IEEE Transactions on Signal Processing 63 (10): 2650–2662. doi:10.1109/TSP.2015.2417507. ISSN 1053-587X. Bibcode: 2015ITSP...63.2650C.
- ↑ "libmusic: A powerful C library for spectral analysis". 2023. https://dataandsignal.com/software/libmusic.
- ↑ "libmusic_m : MATLAB implementation". 2023. https://dataandsignal.com/software/libmusic_m.
Further reading
- The estimation and tracking of frequency, Quinn and Hannan, Cambridge University Press 2001.
Original source: https://en.wikipedia.org/wiki/MUSIC (algorithm).