Direction of arrival (DOA) estimation is important in many sensor systems, such as radar, sonar, electronic surveillance, seismic exploration, and personal communications.
Most high-resolution DOA methods depend on the eigenvalues and eigenvectors of the received signal, so a principal components analysis (PCA) neural network is used to extract them. The PCA neural network has an advantage over the QR and power methods in that it extracts the eigenvalues and eigenvectors directly from the received signal, without first computing the covariance matrix as the QR and power methods require.
The objective of this work is to use signal subspace techniques, with the help of the PCA neural network, to estimate the DOA of the received signal.
Signal subspace techniques are algorithms for estimating the direction of arrival and belong to the parametric methods for DOA estimation, which are classified as shown in the next figure.
Figure: classification of DOA estimation methods into spectral-based techniques (beamformer techniques: the conventional beamformer and Capon; subspace-based: MUSIC) and parametric techniques (subspace-based: ESPRIT; maximum likelihood: DML and SML).
The conventional beamformer is a natural extension of classical Fourier-based spectral analysis to sensor array data. Its power spectrum is $P_{BF}(\theta) = \dfrac{a^H(\theta)\,\hat{R}\,a(\theta)}{a^H(\theta)\,a(\theta)}$, where $a(\theta)$ is the steering vector and $\hat{R}$ is the sample covariance matrix.
The Capon beamformer attempts to resolve two sources spaced closer than a beamwidth. Its power spectrum is $P_{CAP}(\theta) = \dfrac{1}{a^H(\theta)\,\hat{R}^{-1}\,a(\theta)}$.
MUSIC was the first method dedicated to DOA estimation that exploits the noise subspace. Its power spectrum is $P_{M}(\theta) = \dfrac{a^H(\theta)\,a(\theta)}{a^H(\theta)\,\hat{\Pi}\,a(\theta)}$, where $\hat{\Pi} = \hat{U}_n\hat{U}_n^H$ and $\hat{U}_n$ contains the noise-subspace eigenvectors.
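To make the three spectra concrete, here is a minimal NumPy sketch, assuming a ULA with half-wavelength spacing; the function names are illustrative, not from the original work:

```python
import numpy as np

def steering_vector(theta_deg, n_sensors, d=0.5):
    """ULA steering vector a(theta) for spacing d (in wavelengths), angle measured from the array axis."""
    k = np.arange(n_sensors)
    return np.exp(1j * 2 * np.pi * d * k * np.cos(np.deg2rad(theta_deg)))

def doa_spectra(R, n_sources, grid=np.arange(0.0, 180.0, 0.1)):
    """Conventional beamformer, Capon, and MUSIC spectra from a sample covariance R."""
    n_sensors = R.shape[0]
    R_inv = np.linalg.inv(R)
    # eigh returns eigenvalues in ascending order; the smallest ones span the noise subspace.
    _, V = np.linalg.eigh(R)
    Un = V[:, : n_sensors - n_sources]
    Pi = Un @ Un.conj().T                       # noise-subspace projector Un Un^H
    p_bf, p_cap, p_mu = [], [], []
    for th in grid:
        a = steering_vector(th, n_sensors)
        p_bf.append(np.real(a.conj() @ R @ a) / np.real(a.conj() @ a))
        p_cap.append(1.0 / np.real(a.conj() @ R_inv @ a))
        p_mu.append(np.real(a.conj() @ a) / np.real(a.conj() @ Pi @ a))
    return grid, np.array(p_bf), np.array(p_cap), np.array(p_mu)
```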
ESPRIT requires an extensive amount of formulation and matrix manipulation; the block diagram below illustrates its main steps.
Figure: block diagram of the ESPRIT algorithm. The data matrix is eigendecomposed as $\hat{X} = \hat{U}\hat{\Lambda}\hat{U}^H$ and partitioned into the signal subspace $\hat{U}_s$ and the noise subspace $\hat{U}_n$. Two shifted subarray matrices $\hat{U}_1$ and $\hat{U}_2$ are formed from $\hat{U}_s$, and $\hat{\psi}$ is computed (by LS or TLS) from $\hat{U}_2 = \hat{U}_1\hat{\psi}$. Since $\hat{\psi} = T\,\Phi\,T^{-1}$, the DOAs follow from its eigenvalues as $\theta_m = \cos^{-1}\!\big(\arg(\hat{\phi}_m)/\pi\big)$ for half-wavelength element spacing.
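A compact least-squares ESPRIT sketch following those steps (half-wavelength ULA assumed; variable names are illustrative):

```python
import numpy as np

def esprit_doa(X, n_sources):
    """LS-ESPRIT sketch for a half-wavelength ULA; X is (n_sensors x n_snapshots)."""
    R = X @ X.conj().T / X.shape[1]                # sample covariance matrix
    _, V = np.linalg.eigh(R)                       # eigenvalues in ascending order
    Us = V[:, -n_sources:]                         # signal subspace (largest eigenvalues)
    U1, U2 = Us[:-1, :], Us[1:, :]                 # two maximally overlapping subarrays
    Psi, *_ = np.linalg.lstsq(U1, U2, rcond=None)  # solve U2 = U1 Psi (LS); TLS is an alternative
    phi = np.linalg.eigvals(Psi)                   # eigenvalues of Psi = T Phi T^-1
    return np.rad2deg(np.arccos(np.angle(phi) / np.pi))  # theta_m = arccos(arg(phi_m)/pi)
```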
If we use the signal covariance matrix $R_s$ in place of the full covariance matrix $R$ in the Capon method (i.e., ignoring the noise covariance $R_n$), the power spectrum becomes $P_{MCAP}(\theta) = \dfrac{1}{a^H(\theta)\,R_s^{-1}\,a(\theta)}$. The figures below show the resolution of these algorithms for $\theta_1 = 80^\circ$, $\theta_2 = 82^\circ$, and a ULA of ten sensors.
Figure: DOA spectrum (deg) for the Capon method.
Figure: DOA spectrum (deg) for the MUSIC method.
Figure: DOA spectrum (deg) for the MCapon method.
The ESPRIT estimates are $\theta_1 = 79.475^\circ$ and $\theta_2 = 82.636^\circ$. From the above figures it can be seen that the MCapon method gives higher resolution than the other methods.
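A sketch of the MCapon spectrum (half-wavelength ULA assumed; since $R_s$ is generally rank deficient, the pseudo-inverse stands in for $R_s^{-1}$ here):

```python
import numpy as np

def mcapon_spectrum(Rs, grid=np.arange(0.0, 180.0, 0.1), d=0.5):
    """MCapon spectrum P(theta) = 1 / (a^H(theta) Rs^-1 a(theta)) using the signal covariance Rs."""
    n = Rs.shape[0]
    Rs_inv = np.linalg.pinv(Rs)                # pseudo-inverse, since Rs is usually rank deficient
    k = np.arange(n)
    p = np.empty_like(grid)
    for i, th in enumerate(grid):
        a = np.exp(1j * 2 * np.pi * d * k * np.cos(np.deg2rad(th)))  # ULA steering vector
        p[i] = 1.0 / np.real(a.conj() @ Rs_inv @ a)
    return grid, p
```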
The above algorithms require the number of received signals to be known. Many methods have been proposed for estimating this number, such as AIC, MDL, and their modifications OSAIC and OSMDL. The PCA neural network is trained with unsupervised algorithms such as:
1. the symmetric subspace algorithm,
2. the generalized Hebbian algorithm (GHA) (see the sketch after this list),
3. the adaptive principal components extraction algorithm (APEX), and
4. the cascade recursive least squares algorithm (CRLS).
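As an illustration of the second item, here is a minimal sketch of the complex GHA (Sanger's rule) with a fixed learning rate; the adaptive learning rate used in this work is not reproduced, and all names are illustrative:

```python
import numpy as np

def complex_gha(X, n_components, lr=0.01, n_epochs=50, seed=0):
    """Complex generalized Hebbian algorithm (Sanger's rule) sketch.
    X: (n_snapshots x n_sensors) complex snapshots; the rows of the returned W
    converge toward the leading eigenvectors of the input covariance."""
    rng = np.random.default_rng(seed)
    n_sensors = X.shape[1]
    W = 0.1 * (rng.standard_normal((n_components, n_sensors))
               + 1j * rng.standard_normal((n_components, n_sensors)))
    for _ in range(n_epochs):
        for x in X:
            y = W @ x                                           # neuron outputs y = W x
            # Sanger's rule: Hebbian term minus the contributions of preceding neurons.
            W += lr * (np.outer(y, x.conj()) - np.tril(np.outer(y, y.conj())) @ W)
    return W
```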
The proposed model for DOA estimation is shown in the figure below.
Figure: block diagram of the proposed model: ULA or UCA → PCA neural network → estimation of the number of received signals → DOA estimation → output angles.
The first block represents the sensor geometry, which is either a uniform linear array or a uniform circular array, as shown in the figures below.
Figure: uniform linear array geometry, with sensors on the x-axis at positions 0, d, 2d, ..., (L-1)d.
Figure: uniform circular array geometry of radius R in the x-y plane, with the ith plane wave arriving at elevation θ and azimuth Φ.
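For reference, the two array responses could be written as follows (a sketch; the angle conventions and the radius-to-wavelength ratio are assumptions):

```python
import numpy as np

def ula_steering(theta_deg, n_sensors, d_over_lambda=0.5):
    """ULA with sensors at 0, d, 2d, ..., (L-1)d on the x-axis; theta measured from the array axis."""
    l = np.arange(n_sensors)
    return np.exp(1j * 2 * np.pi * d_over_lambda * l * np.cos(np.deg2rad(theta_deg)))

def uca_steering(theta_deg, phi_deg, n_sensors, r_over_lambda=0.5):
    """UCA of radius R in the x-y plane; theta is elevation from the z-axis, phi is azimuth."""
    gamma = 2.0 * np.pi * np.arange(n_sensors) / n_sensors     # angular sensor positions
    theta, phi = np.deg2rad(theta_deg), np.deg2rad(phi_deg)
    return np.exp(1j * 2 * np.pi * r_over_lambda * np.sin(theta) * np.cos(phi - gamma))
```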
The second block represents the PCA neural network, trained with complex-valued algorithms to extract the principal components of the signals impinging on the ULA or UCA; these components are in turn used by the third block to estimate the number of sources. The fourth block applies the subspace algorithms for DOA estimation.
A single source illuminating a UCA of eight sensors is now considered, with elevation angle θ = 30°, azimuth angle Φ = 50°, fm/fs = 0.1, and SNR = 10 dB. When the PCA neural network is trained with the GHA, APEX, and CRLS algorithms, the synaptic weights reach their steady state as shown in the figures below, and the orthogonality condition $W_1 W_1^H = 1$ is satisfied for each of the three algorithms.
Figure: change of the synaptic weights versus the number of iterations for the complex GHA with adaptive learning rate, j = 1, 2, ..., 8.
Figure: change of the synaptic weights versus the number of iterations for the complex APEX with learning rate = 0.01, j = 1, 2, ..., 8.
Figure: change of the synaptic weights versus the number of iterations for the complex CRLS with learning rate = 0.01, j = 1, 2, ..., 8.
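The simulated snapshots for this scenario could be generated roughly as follows (a sketch reusing the uca_steering and complex_gha helpers sketched above; the array radius, snapshot count, and noise model are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_snapshots, snr_db = 8, 1000, 10.0
a = uca_steering(30.0, 50.0, n_sensors)                  # source at elevation 30 deg, azimuth 50 deg
t = np.arange(n_snapshots)
s = np.exp(1j * 2 * np.pi * 0.1 * t)                     # complex exponential source, fm/fs = 0.1
sigma = 10 ** (-snr_db / 20) / np.sqrt(2)                # per-component noise std for 10 dB SNR
noise = sigma * (rng.standard_normal((n_sensors, n_snapshots))
                 + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = np.outer(a, s) + noise                               # (n_sensors x n_snapshots) data matrix
W = complex_gha(X.T, n_components=8)                     # train the complex GHA sketched earlier
```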
After convergence, $y_j \to \lambda_j$ and $W_j \to q_j$ for j = 1, 2, ..., 8, where $y_j$ and $W_j$ are the outputs and the synaptic weights of the PCA neural network, and $\lambda_j$ and $q_j$ are the eigenvalues and eigenvectors. From the eigenvalues and eigenvectors obtained by GHA, APEX, and CRLS it can be seen that
$\dfrac{\lambda_1}{\sum_{i=1}^{8}\lambda_i} \times 100 = 92.53\%$ for the GHA algorithm,
$\dfrac{\lambda_1}{\sum_{i=1}^{8}\lambda_i} \times 100 = 85.76\%$ for the APEX algorithm, and
$\dfrac{\lambda_1}{\sum_{i=1}^{8}\lambda_i} \times 100 = 98.03\%$ for the CRLS algorithm.
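Those ratios suggest reading the number of sources directly off the PCA outputs. The sketch below counts how many leading eigenvalues are needed to capture a dominant share of the total power; the 0.8 threshold is an illustrative assumption, not a value from the text:

```python
import numpy as np

def estimate_n_sources(eigenvalues, threshold=0.8):
    """Count how many leading eigenvalues capture `threshold` of the total power."""
    lam = np.sort(np.abs(np.asarray(eigenvalues)))[::-1]
    fractions = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(fractions, threshold) + 1)

# Single-source example: lambda_1 alone carries ~92-98 % of the power, so one source is reported.
print(estimate_n_sources([9.2, 0.2, 0.15, 0.1, 0.1, 0.1, 0.08, 0.07]))   # -> 1
```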
Hence the first eigenvalue and its eigenvector represent the principal component; in other words, there is only one source. Also, from the eigenvalues and eigenvectors of these algorithms, the relative reconstruction error is
$\dfrac{\left\| R_s - y_1 y_1^{*} W_1 W_1^{H} \right\|}{\left\| R_s \right\|} \times 100\% = 42\%$ for the GHA algorithm,
$= 50.11\%$ for the APEX algorithm, and
$= 10.45\%$ for the CRLS algorithm.
From the above it can be seen that the GHA and APEX give a high reconstruction error, while the CRLS gives an acceptable reconstruction error. This is because their synaptic weights tend to diverge when computing the principal components associated with small eigenvalues. Now $R_s$ can be computed as $R_s = y_1 y_1^{*} W_1 W_1^{H}$ and applied to the MCapon method, as shown in the figures below.
Figure: DOA using the principal component of the complex GHA algorithm.
Figure: DOA using the principal component of the complex APEX algorithm.
Figure: DOA using the principal component of the complex CRLS algorithm.
The effect of noise on the DOA estimates of both the GHA and APEX algorithms is evident, owing to their high reconstruction error, while the CRLS gives the correct DOA.
The power of the CRLS algorithm lies in the fact that it extracts the principal components from the error vector, rather than directly from the input vector as the GHA and APEX algorithms do.
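A minimal sketch of the rank-one $R_s$ reconstruction and the relative reconstruction error quoted above (the output convention $y_1 = W_1 x$ and the Frobenius norm are assumptions):

```python
import numpy as np

def reconstruct_rs(X, w1):
    """Rank-one signal covariance from the first PCA neuron: Rs ~ E[y1 y1*] W1 W1^H.
    X: (n_snapshots x n_sensors) snapshots, w1: converged weight vector of the first neuron."""
    y1 = X @ w1                                   # first neuron output y1 = W1 x per snapshot
    lam1 = np.mean(np.abs(y1) ** 2)               # estimate of the leading eigenvalue
    return lam1 * np.outer(w1, w1.conj())         # Rs estimate, to be fed to the MCapon spectrum

def reconstruction_error(Rs, Rs_hat):
    """Relative reconstruction error (in percent), as quoted for GHA, APEX, and CRLS."""
    return 100.0 * np.linalg.norm(Rs - Rs_hat) / np.linalg.norm(Rs)
```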
From the studied cases and the simulation results presented in this work, the following conclusions can be drawn:
1. A modification to the Capon method has been presented, which gives higher resolution.
2. The number of sources is computed directly from the output of the PCA neural network instead of using the minimum description length (MDL), Akaike's information criterion (AIC), order-statistic MDL (OSMDL), or order-statistic AIC (OSAIC) algorithms.
3. On-line unsupervised learning algorithms for extracting complex-valued principal components, namely the complex generalized Hebbian algorithm, the complex adaptive principal components extraction algorithm, and the complex cascade recursive least squares algorithm, have been derived.
4. The principal components are computed directly from the input signals instead of from the covariance matrix.
5. This work can be especially useful for nonstationary signals, i.e., cases in which the eigenvectors must be updated as new samples arrive.
6. The maximum number of signals that can be estimated is less than or equal to the number of sensors; that is, the DOA cannot be estimated if the number of signals exceeds the number of sensors.
Suggestions for future work include: a hardware implementation of the PCA neural network using a field-programmable gate array (FPGA), whose fast multiplication would lead to fast updating of the network; using a technique such as the maximum likelihood method to jointly estimate the DOA, frequency, and velocity; and using nonparametric methods based on the fast Fourier transform and the wavelet transform for DOA estimation.