UWB Radar Gesture Recognition Method Based on Multi-Scale Feature Decoupling

Abstract: Aiming at the problems of low recognition accuracy and poor robustness caused by weak echo signals and multi-scale feature coupling in ultra-wideband (UWB) radar gesture recognition, a recognition method combining multi-scale feature decoupling with dual-channel co-attention is proposed. First, a clutter suppression pipeline combining moving target indication (MTI) and robust principal component analysis (RPCA) hierarchically suppresses static and low-velocity clutter, effectively improving the signal-to-noise ratio. High-resolution time-frequency images are then generated by the short-time Fourier transform (STFT) and fed into the proposed multi-scale decoupling feature network (MSDFNet). Through a parallel design of large-scale depthwise separable convolutions and small-scale standard convolutions, the network explicitly separates global motion features from local micro-motion features, achieving multi-scale feature decoupling; a dual-channel attention mechanism is further introduced to adaptively enhance key features. Experimental results show that the proposed clutter suppression method improves the average signal-to-noise ratio by 10.8 dB and achieves a clutter suppression rate of 86.4%. Building on this preprocessing, the model attains overall recognition accuracies of 97.8% and 98.1% on a self-built dataset and a public dataset, respectively. The proposed method significantly improves the accuracy and robustness of gesture recognition and provides an efficient, reliable solution for contactless gesture interaction.
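The MTI + RPCA clutter-suppression stage and the STFT step can be sketched as below. This is a minimal illustration, not the paper's implementation: it assumes a range × slow-time data matrix, models MTI as a two-pulse canceller (difference along slow time), and uses a plain inexact-ALM solver for RPCA (principal component pursuit). The toy matrix sizes, the regularizer λ = 1/√max(m, n), and the STFT parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)

def _svt(X, tau):
    # Singular value thresholding: soft-shrink singular values by tau.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def _shrink(X, tau):
    # Elementwise soft thresholding.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, n_iter=200):
    # Principal component pursuit via inexact ALM:
    # M ~= L (low-rank clutter) + S (sparse target returns).
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = _svt(M - S + Y / mu, 1.0 / mu)
        S = _shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)      # dual update enforces M = L + S
        mu = min(mu * 1.1, 1e7)    # gradually tighten the constraint
    return L, S

# Toy radar-like data: static rank-1 clutter + sparse moving-target returns
# (64 range bins x 128 pulses along slow time; values are illustrative).
fast, slow = 64, 128
clutter = np.outer(rng.normal(size=fast), np.ones(slow))  # constant over slow time
target = np.zeros((fast, slow))
target[30, ::7] = 5.0                                     # sparse returns, one range bin
M = clutter + target

# MTI as a two-pulse canceller: differencing along slow time
# cancels returns that are constant from pulse to pulse.
mti = np.diff(M, axis=1)

# RPCA separates residual low-rank clutter from the sparse target component.
L, S = rpca(M)

# STFT over slow time at the target's range bin yields the micro-Doppler
# time-frequency image fed to the classifier (fs and nperseg are assumptions).
f, t, Zxx = stft(S[30], fs=1000, nperseg=32)
```

The two stages are complementary: MTI removes strictly static clutter almost for free, while RPCA additionally captures slowly varying, low-rank clutter that a pulse canceller leaves behind.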
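The motivation for using depthwise separable convolutions in the large-scale branch can be seen from a parameter count: a depthwise (per-channel) spatial filter followed by a 1×1 pointwise mixing layer gives a large receptive field far more cheaply than a standard convolution of the same size. The kernel sizes (7×7 large-scale, 3×3 small-scale) and channel count (64) below are illustrative assumptions, not values from the paper.

```python
def standard_conv_params(k, c_in, c_out):
    # k x k standard convolution: every output channel mixes all input channels.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # k x k depthwise filter per input channel, then 1x1 pointwise mixing.
    return k * k * c_in + c_in * c_out

large_branch = depthwise_separable_params(7, 64, 64)  # hypothetical large-scale branch
small_branch = standard_conv_params(3, 64, 64)        # hypothetical small-scale branch
naive_large = standard_conv_params(7, 64, 64)         # same receptive field, standard conv

print(large_branch, small_branch, naive_large)  # 7232 36864 200704
```

At these (assumed) sizes the separable 7×7 branch uses roughly 27× fewer parameters than a standard 7×7 convolution, which is what makes running a large-kernel global-motion branch in parallel with a small-kernel micro-motion branch affordable.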

         
