Sensor array
A sensor array is a group of sensors, usually deployed in a certain geometric pattern, used for collecting and processing electromagnetic or acoustic signals. The advantage of using a sensor array over a single sensor is that the array adds new dimensions to the observation, helping to estimate more parameters and improving estimation performance. For example, an array of radio antenna elements used for beamforming can increase antenna gain in the direction of the signal while decreasing the gain in other directions, i.e., increasing the signal-to-noise ratio (SNR) by amplifying the signal coherently. Another example of a sensor array application is estimating the direction of arrival of impinging electromagnetic waves. The related processing method is called array signal processing. A third example is chemical sensor arrays, which use multiple chemical sensors for fingerprint detection in complex mixtures or sensing environments. Application examples of array signal processing include radar/sonar, wireless communications, seismology, machine condition monitoring, astronomical observations, fault diagnosis, etc.
Using array signal processing, the temporal and spatial properties (or parameters) of the impinging signals, corrupted by noise and hidden in the data collected by the sensor array, can be estimated and revealed. This is known as parameter estimation.
Plane wave, time domain beamforming
Figure 1 illustrates a six-element uniform linear array (ULA). In this example, the sensor array is assumed to be in the far field of the signal source, so that the impinging wave can be treated as planar.
Parameter estimation takes advantage of the fact that the distance from the source to each antenna in the array is different, which means that the signal received at each antenna is a phase-shifted replica of the signal at the others. Eq. (1) gives the extra time the wave takes to reach the i-th antenna in the array relative to the first one, where c is the propagation velocity of the wave.
[math]\displaystyle{ \Delta t_i = \frac{(i-1)d \cos \theta}{c}, i = 1, 2, ..., M \ \ (1) }[/math]
Each sensor is associated with a different delay. The delays are small but not trivial. In the frequency domain, they appear as phase shifts among the signals received by the sensors. The delays are closely related to the incident angle and the geometry of the sensor array. Given the geometry of the array, the delays or phase differences can be used to estimate the incident angle. Eq. (1) is the mathematical basis behind array signal processing. Simply summing the signals received by the sensors and calculating the mean value gives the result
[math]\displaystyle{ y = \frac{1}{M}\sum_{i=1}^{M} \boldsymbol x_i (t) \ \ (2) }[/math].
Because the received signals are out of phase, this mean value does not give an enhanced signal compared with the original source. Heuristically, if we can find the delay of each of the received signals and remove it prior to the summation, the mean value
[math]\displaystyle{ y = \frac{1}{M}\sum_{i=1}^{M} \boldsymbol x_i (t+\Delta t_i) \ \ (3) }[/math]
will result in an enhanced signal. The process of time-shifting the signals, using a well-chosen set of delays for each channel of the sensor array so that they add constructively, is called beamforming. In addition to the delay-and-sum approach described above, a number of spectral-based (non-parametric) approaches and parametric approaches exist which improve various performance metrics. These beamforming algorithms are briefly described in the sections below, after a short numerical illustration of Eqs. (1)–(3).
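The following sketch is illustrative only: the propagation speed, array geometry, tone frequency and sampling rate are assumed values, not taken from the text above. It computes the delays of Eq. (1) for a hypothetical six-element acoustic ULA, then compares the plain average of Eq. (2) with the delay-compensated average of Eq. (3).

```python
import numpy as np

# Illustration of Eqs. (1)-(3): per-sensor delays for a ULA, a plain average
# of the raw channels, and a delay-compensated average. All numbers are assumed.
c = 343.0                                   # propagation speed (m/s), e.g. sound in air
M, d = 6, 0.04                              # six sensors, 4 cm spacing
theta = np.deg2rad(60.0)                    # direction of arrival
delays = np.arange(M) * d * np.cos(theta) / c          # Eq. (1)

fs, f0 = 48_000, 2_000                      # sample rate and tone frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)
s = np.sin(2 * np.pi * f0 * t)              # source waveform
x = np.array([np.interp(t - tau, t, s, left=0.0, right=0.0)
              for tau in delays])           # each sensor sees a delayed replica

y_plain = x.mean(axis=0)                    # Eq. (2): channels are out of phase
y_aligned = np.array([np.interp(t + tau, t, xi, left=0.0, right=0.0)
                      for tau, xi in zip(delays, x)]).mean(axis=0)   # Eq. (3)

print(np.abs(y_plain).max(), np.abs(y_aligned).max())   # the aligned average is larger
```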
Array design
Sensor arrays have different geometrical designs, including linear, circular, planar, cylindrical and spherical arrays. There are also sensor arrays with arbitrary configurations, which require more complex signal processing techniques for parameter estimation. In a uniform linear array (ULA) the phase of the incoming signal [math]\displaystyle{ \omega\tau }[/math] should be limited to [math]\displaystyle{ \pm\pi }[/math] to avoid grating lobes. This means that for an angle of arrival [math]\displaystyle{ \theta }[/math] in the interval [math]\displaystyle{ [-\frac{\pi}{2},\frac{\pi}{2}] }[/math] the sensor spacing should be smaller than half the wavelength, [math]\displaystyle{ d \leq \lambda/2 }[/math]. However, the width of the main beam, i.e., the resolution or directivity of the array, is determined by the length of the array relative to the wavelength. In order to have decent directional resolution, the length of the array should be several times larger than the wavelength, as illustrated in the sketch below.
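The following sketch is illustrative only (the element count, spacings and scan grid are assumed values). It evaluates the normalized array factor of an M-element ULA for two element spacings: with [math]\displaystyle{ d = \lambda/2 }[/math] there is a single main lobe, while with [math]\displaystyle{ d \gt \lambda/2 }[/math] additional unit-height grating lobes appear; increasing the aperture [math]\displaystyle{ Md }[/math] narrows the main beam.

```python
import numpy as np

# Normalized array factor of an M-element ULA steered to broadside.
# Illustrative sketch; M, the spacings and the grid resolution are assumed.
M, lam = 8, 1.0
theta = np.linspace(0.0, np.pi, 1801)               # scan angles in [0, pi]

def array_factor(d, steer=np.pi / 2):
    """|sum of element phasors| / M for element spacing d, steered to `steer`."""
    n = np.arange(M)[:, None]
    phase = 2 * np.pi * (d / lam) * n * (np.cos(theta) - np.cos(steer))
    return np.abs(np.exp(1j * phase).sum(axis=0)) / M

af_half = array_factor(d=0.5 * lam)   # d = lambda/2: one main lobe at broadside
af_wide = array_factor(d=1.5 * lam)   # d > lambda/2: extra unit-height grating lobes
print(np.sum(af_half > 0.99), np.sum(af_wide > 0.99))   # more near-unity peaks when d > lambda/2
```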
Types of sensor arrays
Antenna array
- Antenna array (electromagnetic), a geometrical arrangement of antenna elements with a deliberate relationship between their currents, forming a single antenna, usually in order to achieve a desired radiation pattern
- Directional array, an antenna array optimized for directionality
- Phased array, an antenna array where the phase shifts (and amplitudes) applied to the elements are modified electronically, typically in order to steer the antenna system's directional pattern, without the use of moving parts
- Smart antenna, a phased array in which a signal processor computes phase shifts to optimize reception and/or transmission to a receiver on the fly, such as is performed by cellular telephone towers
- Digital antenna array, a smart antenna with multi-channel digital beamforming, usually implemented using the FFT
- Interferometric array of radio telescopes or optical telescopes, used to achieve high resolution through interferometric correlation
- Watson-Watt / Adcock antenna array, using the Watson-Watt technique whereby two Adcock antenna pairs are used to perform an amplitude comparison on the incoming signal
Acoustic arrays
- Microphone array, used in acoustic measurement and beamforming
- Loudspeaker array, used in sound reproduction and beamforming
Other arrays
- Geophone array, used in reflection seismology
- Sonar array, an array of hydrophones used in underwater imaging
- Image sensors for digital cameras
Delay-and-sum beamforming
If a time delay is added to the recorded signal from each microphone that is equal and opposite to the delay caused by the additional travel time, the resulting signals will be perfectly in phase with each other. Summing these in-phase signals results in constructive interference that improves the SNR by a factor equal to the number of antennas in the array. This is known as delay-and-sum beamforming. For direction of arrival (DOA) estimation, one can iteratively test time delays for all possible directions. If the guess is wrong, the signals interfere destructively, resulting in a diminished output signal, whereas the correct guess results in the signal amplification described above.
Before the incident angle is estimated, however, the time delay that is 'equal and opposite' to the delay caused by the extra travel time is not known. The solution is to try a series of trial angles [math]\displaystyle{ \hat{\theta} \in [0, \pi] }[/math] at sufficiently high resolution and to calculate the resulting mean output signal of the array using Eq. (3). The trial angle that maximizes the mean output is the DOA estimate given by the delay-and-sum beamformer. Adding an opposite delay to the input signals is equivalent to rotating the sensor array physically; this is therefore also known as beam steering. A sketch of this scanning procedure is given below.
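The sketch below is illustrative only: it reuses the same kind of hypothetical acoustic ULA as the earlier sketch, and all parameter values (speed, spacing, frequencies, noise level, grid resolution) are assumptions rather than values from the text.

```python
import numpy as np

# Delay-and-sum DOA scan: steer the array to each trial angle by removing the
# hypothesized delays (Eq. (3)) and keep the angle with maximum output power.
c, M, d = 343.0, 6, 0.04
fs, f0 = 48_000, 2_000
t = np.arange(0, 0.02, 1 / fs)
rng = np.random.default_rng(0)

theta_true = np.deg2rad(70.0)
true_delays = np.arange(M) * d * np.cos(theta_true) / c
s = np.sin(2 * np.pi * f0 * t)
x = np.array([np.interp(t - tau, t, s, left=0.0, right=0.0) for tau in true_delays])
x += 0.1 * rng.standard_normal(x.shape)          # additive sensor noise

def beam_power(trial_theta):
    """Mean power of the array output after steering to `trial_theta`."""
    taus = np.arange(M) * d * np.cos(trial_theta) / c
    aligned = [np.interp(t + tau, t, xi, left=0.0, right=0.0)
               for tau, xi in zip(taus, x)]
    return np.mean(np.mean(aligned, axis=0) ** 2)

trials = np.deg2rad(np.arange(0.0, 180.5, 0.5))
theta_hat = trials[np.argmax([beam_power(th) for th in trials])]
print(np.rad2deg(theta_hat))                     # expected to be close to 70 degrees
```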
Spectrum-based beamforming
Delay-and-sum beamforming is a time-domain approach. It is simple to implement, but it may estimate the direction of arrival (DOA) poorly. The solution is a frequency-domain approach. The Fourier transform converts the signal from the time domain to the frequency domain, which turns the time delay between adjacent sensors into a phase shift. Thus, the array output vector at any time t can be denoted as [math]\displaystyle{ \boldsymbol x(t) = x_1(t)\begin{bmatrix} 1 & e^{-j\omega\Delta t} & \cdots & e^{-j\omega(M-1)\Delta t} \end{bmatrix}^T }[/math], where [math]\displaystyle{ x_1(t) }[/math] is the signal received by the first sensor. Frequency-domain beamforming algorithms use the spatial covariance matrix, represented by [math]\displaystyle{ \boldsymbol R=E\{ \boldsymbol x(t) \boldsymbol x^H(t)\} }[/math]. This M × M matrix carries the spatial and spectral information of the incoming signals. Assuming zero-mean Gaussian white noise, the basic model of the spatial covariance matrix is given by
[math]\displaystyle{ \boldsymbol R = \boldsymbol V \boldsymbol S \boldsymbol V^H + \sigma^2 \boldsymbol I \ \ (4) }[/math]
where [math]\displaystyle{ \sigma^2 }[/math] is the variance of the white noise, [math]\displaystyle{ \boldsymbol I }[/math] is the identity matrix, [math]\displaystyle{ \boldsymbol S }[/math] is the covariance matrix of the source signals, and [math]\displaystyle{ \boldsymbol V }[/math] is the array manifold matrix [math]\displaystyle{ \boldsymbol V = \begin{bmatrix} \boldsymbol v_1 & \cdots & \boldsymbol v_k \end{bmatrix} }[/math] whose columns are the steering vectors [math]\displaystyle{ \boldsymbol v_i = \begin{bmatrix} 1 & e^{-j\omega\Delta t_i} & \cdots & e^{-j\omega(M-1)\Delta t_i} \end{bmatrix}^T }[/math]. This model is of central importance in frequency-domain beamforming algorithms.
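As a numerical sanity check on this model, the following sketch simulates snapshots from a single far-field source on a ULA and compares the sample spatial covariance matrix with [math]\displaystyle{ \boldsymbol V \boldsymbol S \boldsymbol V^H + \sigma^2 \boldsymbol I }[/math]. It is illustrative only; the number of sensors, the number of snapshots, the source power and the noise variance are assumed values.

```python
import numpy as np

# Compare the sample spatial covariance with the model of Eq. (4) for a
# single unit-power source; all parameter values are illustrative assumptions.
M, d_over_lambda, N = 6, 0.5, 2000          # sensors, spacing/wavelength, snapshots
theta, sigma2 = np.deg2rad(60.0), 0.1
rng = np.random.default_rng(1)

def steering(th):
    """ULA steering vector with phase 2*pi*(d/lambda)*m*cos(theta) at element m."""
    return np.exp(-1j * 2 * np.pi * d_over_lambda * np.arange(M) * np.cos(th))

v = steering(theta)[:, None]                                     # M x 1
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # unit power
n = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
x = v * s + n                                                    # M x N snapshots

R_hat = x @ x.conj().T / N                                       # sample covariance
R_model = v @ v.conj().T + sigma2 * np.eye(M)                    # Eq. (4) with S = 1
print(np.linalg.norm(R_hat - R_model) / np.linalg.norm(R_model)) # small for large N
```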
Some spectrum-based beamforming approaches are listed below.
Conventional (Bartlett) beamformer
The Bartlett beamformer is a natural extension of conventional spectral analysis (the periodogram) to the sensor array. Its spectral power is represented by
[math]\displaystyle{ \hat{P}_{Bartlett}(\theta)=\boldsymbol v^H \boldsymbol R \boldsymbol v \ \ (5) }[/math].
The angle that maximizes this power is an estimate of the angle of arrival.
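A minimal sketch of the Bartlett scan is given below. It is illustrative only; the simulated source direction, powers and scan grid are assumptions, and the sample covariance matrix is used in place of the true [math]\displaystyle{ \boldsymbol R }[/math].

```python
import numpy as np

# Bartlett scan of Eq. (5): evaluate v^H R v on a grid of trial angles and
# pick the peak. The simulation setup and all values are illustrative.
M, d_over_lambda, N = 8, 0.5, 500
rng = np.random.default_rng(2)
steering = lambda th: np.exp(-1j * 2 * np.pi * d_over_lambda * np.arange(M) * np.cos(th))

theta_true = np.deg2rad(75.0)
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
x = steering(theta_true)[:, None] * s \
    + 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = x @ x.conj().T / N                               # sample spatial covariance

grid = np.deg2rad(np.linspace(0.0, 180.0, 721))
p_bartlett = [np.real(steering(th).conj() @ R @ steering(th)) for th in grid]
print(np.rad2deg(grid[np.argmax(p_bartlett)]))       # expected near 75 degrees
```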
MVDR (Capon) beamformer
The Minimum Variance Distortionless Response beamformer, also known as the Capon beamforming algorithm,[1] has a power given by
[math]\displaystyle{ \hat{P}_{Capon}(\theta)=\frac{1}{\boldsymbol v^H \boldsymbol R^{-1} \boldsymbol v} \ \ (6) }[/math].
Though the MVDR/Capon beamformer can achieve better resolution than the conventional (Bartlett) approach, this algorithm has higher complexity due to the full-rank matrix inversion. Technical advances in GPU computing have begun to narrow this gap and make real-time Capon beamforming possible.[2]
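The sketch below evaluates the Capon spectrum on the same kind of simulated data as the Bartlett sketch above and compares the widths of the two peaks; it is illustrative only, and all simulation values are assumptions.

```python
import numpy as np

# Capon/MVDR spectrum of Eq. (6) next to the Bartlett spectrum of Eq. (5) on
# the same simulated data, to show the narrower Capon peak. Values assumed.
M, d_over_lambda, N = 8, 0.5, 500
rng = np.random.default_rng(3)
steering = lambda th: np.exp(-1j * 2 * np.pi * d_over_lambda * np.arange(M) * np.cos(th))

theta_true = np.deg2rad(75.0)
x = steering(theta_true)[:, None] * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) \
    + 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = x @ x.conj().T / N
R_inv = np.linalg.inv(R)                      # the costly full-rank matrix inversion

grid = np.deg2rad(np.linspace(0.0, 180.0, 1801))
p_bart = np.array([np.real(steering(th).conj() @ R @ steering(th)) for th in grid])
p_capon = np.array([1.0 / np.real(steering(th).conj() @ R_inv @ steering(th)) for th in grid])

above_half = lambda p: np.sum(p > 0.5 * p.max())    # grid points above half of the peak
print(np.rad2deg(grid[p_capon.argmax()]), above_half(p_bart), above_half(p_capon))
```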
MUSIC beamformer
The MUSIC (MUltiple SIgnal Classification) beamforming algorithm starts by decomposing the covariance matrix of Eq. (4) into its signal part and its noise part. The eigendecomposition is represented by
[math]\displaystyle{ \boldsymbol R = \boldsymbol U_s \boldsymbol \Lambda_s \boldsymbol U_s^H + \boldsymbol U_n \boldsymbol \Lambda_n \boldsymbol U_n^H \ \ (7) }[/math].
MUSIC uses the noise subspace of the spatial covariance matrix in the denominator of the Capon-type expression
[math]\displaystyle{ \hat{P}_{MUSIC}(\theta)=\frac{1}{\boldsymbol v^H \boldsymbol U_n \boldsymbol U_n^H\boldsymbol v} \ \ (8) }[/math].
The MUSIC beamformer is therefore also known as a subspace beamformer. Compared to the Capon beamformer, it gives much better DOA estimates.
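A sketch of the MUSIC scan follows; it is illustrative only, the number of sources is assumed known, and the directions, powers and noise level are assumed values.

```python
import numpy as np

# MUSIC pseudo-spectrum, Eqs. (7)-(8): eigendecompose the sample covariance,
# keep the noise subspace, and scan Eq. (8) over trial angles. Two closely
# spaced sources are simulated; all parameter values are assumptions.
M, d_over_lambda, N, K = 8, 0.5, 500, 2          # K = number of sources (assumed known)
rng = np.random.default_rng(4)
steering = lambda th: np.exp(-1j * 2 * np.pi * d_over_lambda * np.arange(M) * np.cos(th))

thetas = np.deg2rad([72.0, 80.0])
x = sum(steering(th)[:, None] * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
        for th in thetas)
x += 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = x @ x.conj().T / N

eigvals, U = np.linalg.eigh(R)                   # eigenvalues in ascending order
U_n = U[:, : M - K]                              # noise subspace (M - K smallest eigenvalues)
grid = np.deg2rad(np.linspace(0.0, 180.0, 3601))
p_music = np.array([1.0 / np.linalg.norm(U_n.conj().T @ steering(th)) ** 2 for th in grid])
# The two sharpest peaks of p_music lie near the true DOAs (about 72 and 80 degrees).
```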
SAMV beamformer
The SAMV beamforming algorithm is a sparse-signal-reconstruction-based algorithm which explicitly exploits the time-invariant statistical characteristic of the covariance matrix. It achieves superresolution and is robust to highly correlated signals.
Parametric beamformers
One of the major advantages of the spectrum-based beamformers is their lower computational complexity, but they may not give accurate DOA estimates if the signals are correlated or coherent. An alternative approach is the parametric beamformer, also known as the maximum likelihood (ML) beamformer. One example of a maximum likelihood method commonly used in engineering is the least squares method. In the least squares approach, a quadratic penalty function is used. To get the minimum value (or least squared error) of the quadratic penalty function (or objective function), take its derivative (which is linear), set it equal to zero and solve the resulting system of linear equations.
In ML beamformers, the quadratic penalty function is applied to the spatial covariance matrix and the signal model. One example of an ML beamformer penalty function is
[math]\displaystyle{ L_{ML}(\theta)=\|\hat{\boldsymbol R}- \boldsymbol R\|_F^2 = \|\hat{\boldsymbol R}-( \boldsymbol V \boldsymbol S \boldsymbol V^H + \sigma^2 \boldsymbol I )\|_F^2 \ \ (9) }[/math] ,
where [math]\displaystyle{ \| \cdot \|_F }[/math] is the Frobenius norm. It can be seen from Eq. (4) that the penalty function of Eq. (9) is minimized by making the signal model approximate the sample covariance matrix as accurately as possible. In other words, the maximum likelihood beamformer finds the DOA [math]\displaystyle{ \theta }[/math], the independent variable of the matrix [math]\displaystyle{ \boldsymbol V }[/math], that minimizes the penalty function in Eq. (9); a grid-search sketch of this covariance-fitting idea is given below. In practice, the penalty function may look different, depending on the signal and noise model. For this reason, there are two major categories of maximum likelihood beamformers: deterministic ML beamformers and stochastic ML beamformers, corresponding to a deterministic and a stochastic signal model, respectively.
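The following sketch is a simplified, illustrative grid search in the spirit of Eq. (9). To keep it one-dimensional, a single source with known unit power and known noise variance is assumed; practical ML beamformers estimate these nuisance parameters jointly with the DOA and use more refined optimization than an exhaustive grid.

```python
import numpy as np

# Covariance fitting in the spirit of Eq. (9): grid search over theta that
# minimizes ||R_hat - (V S V^H + sigma^2 I)||_F^2, with S = 1 and sigma^2
# assumed known. All parameter values below are illustrative assumptions.
M, d_over_lambda, N, sigma2 = 8, 0.5, 500, 0.2
rng = np.random.default_rng(5)
steering = lambda th: np.exp(-1j * 2 * np.pi * d_over_lambda * np.arange(M) * np.cos(th))

theta_true = np.deg2rad(65.0)
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)   # unit power
x = steering(theta_true)[:, None] * s \
    + np.sqrt(sigma2 / 2) * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R_hat = x @ x.conj().T / N

def penalty(th):
    v = steering(th)[:, None]
    model = v @ v.conj().T + sigma2 * np.eye(M)          # V S V^H + sigma^2 I with S = 1
    return np.linalg.norm(R_hat - model, 'fro') ** 2     # Eq. (9)

grid = np.deg2rad(np.linspace(0.0, 180.0, 721))
print(np.rad2deg(grid[np.argmin([penalty(th) for th in grid])]))   # expected near 65 degrees
```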
Another way to modify the penalty function is to simplify its minimization by differentiation. In order to simplify the optimization algorithm, logarithmic operations and the probability density function (PDF) of the observations may be used in some ML beamformers.
The optimization problem is solved by setting the derivative of the penalty function to zero and finding its roots. Because the resulting equation is non-linear, a numerical root-finding approach such as the Newton–Raphson method is usually employed. The Newton–Raphson method is an iterative root-finding method with the iteration
[math]\displaystyle{ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \ \ (10) }[/math].
The search starts from an initial guess [math]\displaystyle{ x_0 }[/math]. If the Newton–Raphson search method is employed to minimize the beamforming penalty function, the resulting beamformer is called a Newton ML beamformer; a minimal sketch of the iteration is given below. Several well-known ML beamformers are then described without providing further details, due to the complexity of the expressions.
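The sketch below applies the iteration of Eq. (10) to a toy scalar function rather than to an actual beamforming penalty; it is purely illustrative of the numerical method.

```python
# Minimal Newton-Raphson sketch of Eq. (10); in an ML beamformer the same
# iteration would be applied to the derivative of the penalty function.
def newton(f, df, x0, iters=20, tol=1e-12):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) starting from the initial guess x0."""
    x = x0
    for _ in range(iters):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy example: the root of f(x) = x^2 - 2 starting from x0 = 1 is sqrt(2).
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0))
```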
- Deterministic maximum likelihood beamformer
- In the deterministic maximum likelihood beamformer (DML), the noise is modeled as a stationary Gaussian white random process, while the signal waveform is modeled as deterministic (but arbitrary) and unknown.
- Stochastic maximum likelihood beamformer
- In the stochastic maximum likelihood beamformer (SML), the noise is modeled as a stationary Gaussian white random process (as in DML), whereas the signal waveform is modeled as a Gaussian random process.
- Method of direction estimation
- The method of direction estimation (MODE) is a subspace maximum likelihood beamformer, just as MUSIC is a subspace spectral-based beamformer. Subspace ML beamforming is obtained by eigendecomposition of the sample covariance matrix.
References
- ↑ Capon, J. (1969). "High-resolution frequency-wavenumber spectrum analysis". Proceedings of the IEEE 57 (8): 1408–1418. doi:10.1109/PROC.1969.7278.
- ↑ Asen, Jon Petter; Buskenes, Jo Inge; Nilsen, Carl-Inge Colombo; Austeng, Andreas; Holm, Sverre (2014). "Implementing capon beamforming on a GPU for real-time cardiac ultrasound imaging". IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 61 (1): 76–85. doi:10.1109/TUFFC.2014.6689777. PMID 24402897.
Further reading
- Van Trees, Harry L. (2002). Detection, estimation, and modulation theory. 4: Optimum array processing. New York, NY: Wiley. doi:10.1002/0471221104. ISBN 9780471093909.
- H. Krim and M. Viberg, “Two decades of array signal processing research”, IEEE Signal Processing Magazine, July 1996
- S. Haykin, Ed., “Array Signal Processing”, Englewood Cliffs, NJ: Prentice-Hall, 1985
- S. U. Pillai, “Array Signal Processing”, New York: Springer-Verlag, 1989
- P. Stoica and R. Moses, “Introduction to Spectral Analysis", Prentice-Hall, Englewood Cliffs, USA, 1997. available for download.
- J. Li and P. Stoica, “Robust Adaptive Beamforming", John Wiley, 2006.
- J. Cadzow, “Multiple Source Location—The Signal Subspace Approach”, IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 38, No. 7, July 1990
- G. Bienvenu and L. Kopp, “Optimality of high resolution array processing using the eigensystem approach”, IEEE Transactions on Acoustics, Speech and Signal Process, Vol. ASSP-31, pp. 1234–1248, October 1983
- I. Ziskind and M. Wax, “Maximum likelihood localization of multiple sources by alternating projection”, IEEE Transactions on Acoustics, Speech and Signal Process, Vol. ASSP-36, pp. 1553–1560, October 1988
- B. Ottersten, M. Viberg, P. Stoica, and A. Nehorai, “Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing”, Radar Array Processing, Springer-Verlag, Berlin, pp. 99–151, 1993
- M. Viberg, B. Ottersten, and T. Kailath, “Detection and estimation in sensor arrays using weighted subspace fitting”, IEEE Transactions on Signal Processing, vol. SP-39, pp. 2436–2449, November 1991
- Feder, M.; Weinstein, E. (April 1988). "Parameter estimation of superimposed signals using the EM algorithm". IEEE Transactions on Acoustics, Speech, and Signal Processing 36 (4): 477–489. doi:10.1109/29.1552.
- Y. Bresler and A. Macovski, “Exact maximum likelihood parameter estimation of superimposed exponential signals in noise”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-34, pp. 1081–1089, October 1986
- R. O. Schmidt, “New mathematical tools in direction finding and spectral analysis”, Proceedings of SPIE 27th Annual Symposium, San Diego, California, August 1983
Original source: https://en.wikipedia.org/wiki/Sensor_array