Abstract:
Automatic modulation recognition (AMR) is a key technology in cognitive radio, intelligent communications, and signal reconnaissance. Although deep learning has advanced the field, convolutional neural networks still suffer from limited robustness, weak frequency-domain modeling, and insufficient multi-scale temporal perception under complex channel conditions. To address these issues, this thesis proposes a neural-network-based AMR model, the time-frequency aligned and multi-scale fusion network (TFFNet), which incorporates frequency-domain correction and multi-scale feature fusion. TFFNet builds a coupling pathway between the time and frequency domains for fine-grained spectral correction, introduces a multi-scale feature perception mechanism to strengthen the extraction of both local and global structures, and integrates a lightweight attention module to improve inter-channel semantic fusion, allowing it to better adapt to modulation-pattern variations in non-ideal channel environments. Experiments on the open-source datasets RML2016.10a and RML2016.10b show that TFFNet achieves overall accuracies of 63.14% and 65.51% and peak accuracies of 93.16% and 94.13%, respectively, outperforming several mainstream deep learning models on multiple evaluation metrics. These results demonstrate that spectral correction and multi-scale modeling are effective for AMR and confirm the strong application potential of TFFNet.