Open Access
Direction of arrival estimation in passive radar based on deep neural network
Author(s) - Lyu Xiaoyong, Wang Jun
Publication year - 2021
Publication title - IET Signal Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.384
H-Index - 42
eISSN - 1751-9683
pISSN - 1751-9675
DOI - 10.1049/sil2.12065
Subject(s) - computer science , direction of arrival , radar , clutter , artificial intelligence , artificial neural network , perceptron , covariance matrix , antenna array , algorithm , antenna (radio) , pattern recognition (psychology) , telecommunications
Most traditional direction of arrival (DOA) estimation methods in passive radar are based on a parametric model of the antenna array manifold and therefore adapt poorly to array errors. Data‐driven machine learning methods, by contrast, can adapt well to array errors. However, most existing machine learning‐based methods cannot be applied directly to passive radar DOA estimation, because the array covariance matrix they take as input is difficult to estimate with adequate accuracy in passive radar owing to the low target signal‐to‐clutter‐plus‐noise ratio (SCNR). A deep learning‐based method for DOA estimation in passive radar is proposed here. Clutter cancellation and range–Doppler cross‐correlation (RDCC) are performed to increase the target SCNR. The RDCC result is taken as the input of the deep learning method, and the amplitude and phase uncertainties of the RDCC result are accounted for. A two‐stage deep neural network (DNN) is designed: the first stage determines the spatial sub‐region containing the target, and the second stage produces the refined DOA estimate. Simulations show that the proposed two‐stage DNN clearly outperforms the traditional passive radar DOA estimation method and a multi‐layer perceptron network. Experiments on real data verify the superiority of the proposed method.
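To make the processing chain described in the abstract concrete, the sketch below illustrates a generic per-element range–Doppler cross-correlation (cross-ambiguity) computation of the kind the abstract refers to. It is not the authors' code: the function name rdcc, the inputs surv (clutter-cancelled surveillance signals, one row per antenna element), ref (reference signal), fs, max_delay, and doppler_bins are all assumptions introduced here for illustration.

import numpy as np

def rdcc(surv, ref, fs, max_delay, doppler_bins):
    """Per-element range-Doppler cross-correlation map (illustrative sketch).

    surv : (M, N) complex array of clutter-cancelled surveillance signals
           from M antenna elements.
    ref  : (N,) complex reference (direct-path) signal.
    Returns an (M, K, max_delay) map over K Doppler bins; the peak cell
    over delay/Doppler gives one complex value per element.
    """
    M, N = surv.shape
    t = np.arange(N) / fs
    out = np.zeros((M, len(doppler_bins), max_delay), dtype=complex)
    for k, fd in enumerate(doppler_bins):
        ref_d = ref * np.exp(2j * np.pi * fd * t)    # Doppler-shifted reference
        for tau in range(max_delay):                 # delay (range) bins
            out[:, k, tau] = surv[:, tau:] @ np.conj(ref_d[: N - tau])
    return out

Under the same assumptions, the complex value at the detected range–Doppler peak of each element would form the array snapshot whose amplitudes and phases feed the two-stage DNN: the first stage classifies the coarse spatial sub-region of the target, and the second stage refines the DOA within that sub-region.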
