SAR image coherent speckle suppression method based on an edge-guided dual-branch network
-
Abstract: To further improve the coherent speckle suppression and edge preservation performance of deep learning methods for synthetic aperture radar (SAR) images, a coherent speckle suppression method based on an edge-guided dual-branch network was proposed. A new despeckling network model was constructed, consisting of an edge information extraction block and a dual-branch despeckling network. First, the edge information extraction block was built with a dense cascade strategy to enhance the edge perception capability of the model. Second, a channel attention-based residual despeckling subnetwork (CARNet), a mixed attention-based enhanced despeckling subnetwork (MAENet), and a multi-branch parallel multi-scale feature fusion block (MPMFFB) were combined to form the dual-branch despeckling network, so that edge details are better preserved while coherent speckle is suppressed. The experimental results show that the proposed method achieves better speckle suppression and edge preservation than recent state-of-the-art methods such as SAR-Transformer and HTNet. For simulated SAR images, the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and edge preservation index (EPI) are improved by 0.96 dB, 2.60%, and 0.60% on average, respectively. For real SAR images, the equivalent number of looks (ENL) is improved by more than 14.12%, and the EPI is improved by 4.52% on average.
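As an orientation aid, the sketch below shows, in PyTorch style, one plausible way the components named in the abstract could be connected. The edge guidance by channel concatenation and the fusion of the two branch outputs are illustrative assumptions, not the authors' specification; the four submodules are placeholders for the blocks detailed later.

    import torch
    import torch.nn as nn

    class EdgeGuidedDespeckler(nn.Module):
        """Illustrative wiring only (assumed, not the paper's exact design):
        an edge prior guides two despeckling branches whose outputs are fused."""
        def __init__(self, edge_block, carnet, maenet, mpmffb):
            super().__init__()
            self.edge_block = edge_block  # dense-cascade edge information extraction
            self.carnet = carnet          # channel attention-based residual branch
            self.maenet = maenet          # mixed attention-based enhanced branch
            self.mpmffb = mpmffb          # multi-branch parallel multi-scale fusion

        def forward(self, speckled):
            edge = self.edge_block(speckled)             # edge prior from the noisy input
            guided = torch.cat([speckled, edge], dim=1)  # assumed guidance: channel concatenation
            out_a = self.carnet(guided)                  # branch 1
            out_b = self.maenet(guided)                  # branch 2
            return self.mpmffb(torch.cat([out_a, out_b], dim=1))  # fused despeckled image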
-
Table 1. CARNet parameters

| Encoder/Decoder | Level | Layer structure |
| --- | --- | --- |
| Encoder | L1 | {conv3×3, 64}×1 |
| Encoder | L2 | {resblock, 64}×4, {strideconv2×2, 128}×1 |
| Encoder | L3 | {resblock, 128}×4, {strideconv2×2, 256}×1 |
| Encoder | L4 | {resblock, 256}×4, {strideconv2×2, 512}×1 |
| Encoder | L5 | {resblock, 512}×4, {RCABlock, 512}×4 |
| Decoder | L6 | {convtranspose2×2, 256}×1, {resblock, 256}×4 |
| Decoder | L7 | {convtranspose2×2, 128}×1, {resblock, 128}×4 |
| Decoder | L8 | {convtranspose2×2, 64}×1, {resblock, 64}×4 |
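For readers decoding the {resblock, C} and {RCABlock, C} notation in Table 1, the following is a minimal PyTorch sketch of a residual channel attention block in the RCAN style of [16]; the 3×3 kernels, reduction ratio, and ReLU/Sigmoid choices are common defaults assumed here, not necessarily the paper's exact settings.

    import torch.nn as nn

    class RCAB(nn.Module):
        """Residual channel attention block (cf. [16]): two 3x3 convolutions followed
        by a squeeze-and-excitation style channel gate, added back to the input."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),                        # global average pooling -> (N, C, 1, 1)
                nn.Conv2d(channels, channels // reduction, 1),  # channel squeeze
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),  # channel excitation
                nn.Sigmoid(),                                   # per-channel weights in (0, 1)
            )

        def forward(self, x):
            res = self.body(x)
            return x + res * self.gate(res)  # channel-reweighted residual connection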
Table 2. MAENet parameters
| Encoder/Decoder | Level | Layer structure |
| --- | --- | --- |
| Encoder | L1 | {conv3×3, 64}×1 |
| Encoder | L2 | {swin-conv block, 64}×2, {strideconv2×2, 128}×1 |
| Encoder | L3 | {swin-conv block, 128}×2, {strideconv2×2, 256}×1 |
| Encoder | L4 | {swin-conv block, 256}×2, {strideconv2×2, 512}×1 |
| Encoder | L5 | {swin-conv block, 512}×2, {CBAM, 512}×1 |
| Decoder | L6 | {convtranspose2×2, 256}×1, {swin-conv block, 256}×2 |
| Decoder | L7 | {convtranspose2×2, 128}×1, {swin-conv block, 128}×2 |
| Decoder | L8 | {convtranspose2×2, 64}×1, {swin-conv block, 64}×2 |
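Tables 1 and 2 specify the same eight-level encoder-decoder skeleton and differ only in the basic block (residual blocks with an RCAB tail for CARNet; Swin-Conv blocks [17] with a CBAM tail [18] for MAENet). The sketch below assembles that skeleton from the tabulated channel widths and samplers; the single-channel input, the omitted attention tails, and the absence of encoder-decoder skip connections are simplifications assumed purely for illustration.

    import torch.nn as nn

    def make_level(block, channels, repeats):
        """Stack `repeats` copies of a basic block (e.g. RCAB or a Swin-Conv block) at one width."""
        return nn.Sequential(*[block(channels) for _ in range(repeats)])

    class EncoderDecoderSkeleton(nn.Module):
        """Shared CARNet/MAENet skeleton per Tables 1-2: 3x3 stem, three stride-2 conv
        downsamplings (64 -> 128 -> 256 -> 512), then three transpose-conv upsamplings."""
        def __init__(self, block, repeats):
            super().__init__()
            self.stem = nn.Conv2d(1, 64, 3, padding=1)                      # L1
            self.enc2 = nn.Sequential(make_level(block, 64, repeats),
                                      nn.Conv2d(64, 128, 2, stride=2))      # L2
            self.enc3 = nn.Sequential(make_level(block, 128, repeats),
                                      nn.Conv2d(128, 256, 2, stride=2))     # L3
            self.enc4 = nn.Sequential(make_level(block, 256, repeats),
                                      nn.Conv2d(256, 512, 2, stride=2))     # L4
            self.bottom = make_level(block, 512, repeats)                   # L5 (attention tail omitted)
            self.dec6 = nn.Sequential(nn.ConvTranspose2d(512, 256, 2, stride=2),
                                      make_level(block, 256, repeats))      # L6
            self.dec7 = nn.Sequential(nn.ConvTranspose2d(256, 128, 2, stride=2),
                                      make_level(block, 128, repeats))      # L7
            self.dec8 = nn.Sequential(nn.ConvTranspose2d(128, 64, 2, stride=2),
                                      make_level(block, 64, repeats))       # L8

        def forward(self, x):
            x = self.enc4(self.enc3(self.enc2(self.stem(x))))
            x = self.bottom(x)
            return self.dec8(self.dec7(self.dec6(x)))

Under this parameterization, CARNet corresponds to repeats = 4 with plain residual blocks (plus the RCABlock tail at L5), and MAENet to repeats = 2 with Swin-Conv blocks (plus the CBAM tail at L5).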
Table 3. Parameter index comparison for speckle suppression of simulated SAR images
| L | Method | PSNR/dB (Set12) | PSNR/dB (Classic5) | PSNR/dB (Kodak24) | SSIM (Set12) | SSIM (Classic5) | SSIM (Kodak24) | EPI (Set12) | EPI (Classic5) | EPI (Kodak24) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | SAR-BM3D[21] | 24.01 | 24.57 | 23.90 | 0.7670 | 0.7767 | 0.7914 | 0.9579 | 0.9426 | 0.9485 |
| 1 | ID-CNN[9] | 25.72 | 20.18 | 20.66 | 0.7671 | 0.6421 | 0.6422 | 0.9651 | 0.8696 | 0.8796 |
| 1 | SAR-CNN[8] | 26.10 | 26.04 | 26.64 | 0.7912 | 0.7876 | 0.8055 | 0.9682 | 0.9467 | 0.9594 |
| 1 | MONet[10] | 26.14 | 26.34 | 26.93 | 0.7937 | 0.7994 | 0.8168 | 0.9684 | 0.9491 | 0.9617 |
| 1 | SAR-DCNN[11] | 26.25 | 26.45 | 27.03 | 0.7966 | 0.8052 | 0.8220 | 0.9692 | 0.9501 | 0.9625 |
| 1 | SAR-Transformer[12] | 26.19 | 26.44 | 26.98 | 0.7999 | 0.8072 | 0.8234 | 0.9688 | 0.9499 | 0.9620 |
| 1 | HTNet[13] | 26.22 | 26.53 | 26.91 | 0.8083 | 0.8214 | 0.8367 | 0.9691 | 0.9494 | 0.9613 |
| 1 | Proposed method | 27.17 | 27.57 | 28.00 | 0.8356 | 0.8631 | 0.8713 | 0.9753 | 0.9617 | 0.9701 |
| 4 | SAR-BM3D[21] | 27.85 | 28.40 | 28.01 | 0.8567 | 0.8845 | 0.8889 | 0.9807 | 0.9722 | 0.9749 |
| 4 | ID-CNN[9] | 28.96 | 28.99 | 29.50 | 0.8628 | 0.8831 | 0.8908 | 0.9837 | 0.9726 | 0.9790 |
| 4 | SAR-CNN[8] | 28.83 | 28.84 | 29.36 | 0.8627 | 0.8815 | 0.8902 | 0.9834 | 0.9723 | 0.9787 |
| 4 | MONet[10] | 29.20 | 29.29 | 29.73 | 0.8725 | 0.8918 | 0.8992 | 0.9831 | 0.9741 | 0.9799 |
| 4 | SAR-DCNN[11] | 29.31 | 29.39 | 29.82 | 0.8749 | 0.8950 | 0.9022 | 0.9849 | 0.9746 | 0.9803 |
| 4 | SAR-Transformer[12] | 29.10 | 29.20 | 29.61 | 0.8733 | 0.8949 | 0.9005 | 0.9842 | 0.9737 | 0.9794 |
| 4 | HTNet[13] | 29.13 | 29.18 | 29.65 | 0.8707 | 0.8886 | 0.8972 | 0.9843 | 0.9734 | 0.9796 |
| 4 | Proposed method | 29.53 | 29.69 | 30.02 | 0.8839 | 0.9050 | 0.9092 | 0.9857 | 0.9761 | 0.9810 |
| 8 | SAR-BM3D[21] | 29.52 | 29.98 | 29.82 | 0.8883 | 0.9168 | 0.9222 | 0.9864 | 0.9797 | 0.9824 |
| 8 | ID-CNN[9] | 30.51 | 30.20 | 30.66 | 0.8948 | 0.9100 | 0.9159 | 0.9886 | 0.9796 | 0.9840 |
| 8 | SAR-CNN[8] | 30.69 | 30.81 | 31.24 | 0.9013 | 0.9226 | 0.9293 | 0.9891 | 0.9820 | 0.9859 |
| 8 | MONet[10] | 30.70 | 30.76 | 31.22 | 0.8996 | 0.9212 | 0.9281 | 0.9891 | 0.9819 | 0.9859 |
| 8 | SAR-DCNN[11] | 30.83 | 30.88 | 31.32 | 0.9031 | 0.9241 | 0.9301 | 0.9894 | 0.9822 | 0.9861 |
| 8 | SAR-Transformer[12] | 30.52 | 30.54 | 31.02 | 0.8999 | 0.9218 | 0.9271 | 0.9886 | 0.9811 | 0.9852 |
| 8 | HTNet[13] | 30.67 | 30.71 | 31.16 | 0.9002 | 0.9206 | 0.9274 | 0.9890 | 0.9816 | 0.9857 |
| 8 | Proposed method | 31.04 | 31.12 | 31.50 | 0.9091 | 0.9304 | 0.9348 | 0.9899 | 0.9831 | 0.9866 |
| 16 | SAR-BM3D[21] | 31.27 | 31.58 | 31.68 | 0.9173 | 0.9429 | 0.9480 | 0.9907 | 0.9858 | 0.9882 |
| 16 | ID-CNN[9] | 32.21 | 31.38 | 31.89 | 0.9222 | 0.9342 | 0.9383 | 0.9923 | 0.9850 | 0.9881 |
| 16 | SAR-CNN[8] | 32.14 | 32.11 | 32.70 | 0.9225 | 0.9431 | 0.9490 | 0.9923 | 0.9873 | 0.9901 |
| 16 | MONet[10] | 32.32 | 32.30 | 32.91 | 0.9257 | 0.9453 | 0.9517 | 0.9926 | 0.9877 | 0.9905 |
| 16 | SAR-DCNN[11] | 32.30 | 32.39 | 33.02 | 0.9269 | 0.9468 | 0.9527 | 0.9927 | 0.9879 | 0.9906 |
| 16 | SAR-Transformer[12] | 32.08 | 31.99 | 32.60 | 0.9231 | 0.9443 | 0.9490 | 0.9921 | 0.9869 | 0.9898 |
| 16 | HTNet[13] | 32.32 | 32.28 | 32.88 | 0.9251 | 0.9451 | 0.9511 | 0.9925 | 0.9876 | 0.9904 |
| 16 | Proposed method | 32.65 | 32.60 | 33.18 | 0.9311 | 0.9506 | 0.9558 | 0.9931 | 0.9883 | 0.9909 |

Note: Bold values indicate the best results.
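The column L in Table 3 is the number of looks of the simulated speckle. The excerpt does not state the paper's exact simulation protocol, but a common choice, assumed here only for illustration, is to multiply a clean intensity image by unit-mean gamma-distributed noise with shape parameter L and to score the despeckled output with PSNR.

    import numpy as np

    def add_speckle(clean: np.ndarray, looks: int, rng=None) -> np.ndarray:
        """Multiply a clean intensity image (float, range [0, 1]) by unit-mean L-look gamma speckle."""
        rng = np.random.default_rng() if rng is None else rng
        noise = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)  # mean = 1, variance = 1/L
        return clean * noise

    def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 1.0) -> float:
        """Peak signal-to-noise ratio in dB, as reported in Table 3."""
        mse = float(np.mean((reference - estimate) ** 2))
        return 10.0 * np.log10(peak ** 2 / mse)

Smaller L means heavier speckle, which matches the monotone rise of every method's PSNR in Table 3 as L grows from 1 to 16.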
Table 4. Parameter index comparison for speckle suppression of real SAR images
| Region | Method | ENL (Region A) | ENL (Region B) | EPI |
| --- | --- | --- | --- | --- |
| 1 | SAR-BM3D[21] | 67.10 | 4.73 | 0.8663 |
| 1 | ID-CNN[9] | 61.91 | 5.58 | 0.8864 |
| 1 | SAR-CNN[8] | 82.19 | 5.69 | 0.7903 |
| 1 | MONet[10] | 88.53 | 5.47 | 0.8427 |
| 1 | SAR-DCNN[11] | 124.49 | 5.65 | 0.8437 |
| 1 | SAR-Transformer[12] | 110.82 | 5.68 | 0.8414 |
| 1 | HTNet[13] | 115.75 | 5.72 | 0.8642 |
| 1 | Proposed method | 146.81 | 5.72 | 0.8959 |
| 2 | SAR-BM3D[21] | 71.29 | 48.44 | 0.8610 |
| 2 | ID-CNN[9] | 23.91 | 200.09 | 0.8127 |
| 2 | SAR-CNN[8] | 66.73 | 114.66 | 0.8061 |
| 2 | MONet[10] | 60.92 | 220.34 | 0.8415 |
| 2 | SAR-DCNN[11] | 104.03 | 215.62 | 0.8364 |
| 2 | SAR-Transformer[12] | 66.13 | 153.32 | 0.8271 |
| 2 | HTNet[13] | 97.05 | 207.31 | 0.8630 |
| 2 | Proposed method | 127.09 | 224.45 | 0.8752 |
| 3 | SAR-BM3D[21] | 115.69 | 76.93 | 0.8838 |
| 3 | ID-CNN[9] | 75.60 | 146.93 | 0.8605 |
| 3 | SAR-CNN[8] | 191.25 | 104.97 | 0.8391 |
| 3 | MONet[10] | 470.98 | 147.98 | 0.8560 |
| 3 | SAR-DCNN[11] | 484.92 | 162.35 | 0.8689 |
| 3 | SAR-Transformer[12] | 411.91 | 119.64 | 0.8404 |
| 3 | HTNet[13] | 581.17 | 150.81 | 0.8802 |
| 3 | Proposed method | 822.13 | 164.44 | 0.9089 |

Note: Bold values indicate the best results.
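Because real SAR images have no clean reference, Table 4 relies on no-reference measures. The sketch below gives the standard ENL estimate over a manually chosen homogeneous region (squared mean over variance; larger values indicate stronger smoothing) and one common gradient-ratio form of the EPI; the paper may compute EPI with a different variant, so the second function is an illustrative assumption.

    import numpy as np

    def enl(homogeneous_region: np.ndarray) -> float:
        """Equivalent number of looks over a homogeneous region: mean^2 / variance."""
        return float(homogeneous_region.mean() ** 2 / homogeneous_region.var())

    def epi(original: np.ndarray, filtered: np.ndarray) -> float:
        """Edge preservation index (one common variant): ratio of the summed absolute
        horizontal and vertical differences of the filtered image to those of the original."""
        def gradient_energy(img: np.ndarray) -> float:
            return float(np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum())
        return gradient_energy(filtered) / gradient_energy(original)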
Table 5. Experimental results for different block settings
| Baseline network | DCEEB | Attention | MPMFFB | PSNR/dB | SSIM | EPI |
| --- | --- | --- | --- | --- | --- | --- |
| √ |   |   |   | 26.38 | 0.8126 | 0.9703 |
| √ | √ |   |   | 26.88 | 0.8390 | 0.9736 |
| √ | √ | √ |   | 27.07 | 0.8339 | 0.9749 |
| √ |   | √ |   | 26.39 | 0.8119 | 0.9703 |
| √ |   | √ | √ | 26.50 | 0.8152 | 0.9711 |
| √ |   |   | √ | 26.40 | 0.8111 | 0.9703 |
| √ | √ |   | √ | 27.00 | 0.8334 | 0.9744 |
| √ | √ | √ | √ | 27.17 | 0.8356 | 0.9753 |
Table 6. Experimental results of single/dual-branch network model
| MAENet | CARNet | PSNR/dB | SSIM | EPI |
| --- | --- | --- | --- | --- |
| √ |   | 29.37 | 0.8820 | 0.9852 |
|   | √ | 29.45 | 0.8823 | 0.9854 |
| √ | √ | 29.53 | 0.8839 | 0.9857 |
Table 7. Experimental results of different dual-branch network structure settings
| Branch 1 | Branch 2 | PSNR/dB | SSIM | EPI |
| --- | --- | --- | --- | --- |
| MAENet | MAENet | 26.79 | 0.8264 | 0.9727 |
| CARNet | CARNet | 26.98 | 0.8336 | 0.9740 |
| MAENet | CARNet | 27.10 | 0.8356 | 0.9747 |
| CARNet | MAENet | 27.17 | 0.8361 | 0.9753 |
Table 8. Data index results of different attention mechanism settings
| Model | PSNR/dB | SSIM | EPI |
| --- | --- | --- | --- |
| Model 1 | 27.07 | 0.8338 | 0.9745 |
| Model 2 | 27.05 | 0.8346 | 0.9747 |
| Model 3 | 27.13 | 0.8352 | 0.9751 |
| Model 4 | 27.17 | 0.8356 | 0.9753 |
-
[1] SINGH P, DIWAKAR M, SHANKAR A, et al. A review on SAR image and its despeckling[J]. Archives of Computational Methods in Engineering, 2021, 28(7): 4633-4653. doi: 10.1007/s11831-021-09548-z
[2] LI J M, ZHU L, ZHANG B, et al. Non-local means suppression algorithm of speckle with restrained search window[J]. Journal of Xi'an Jiaotong University, 2020, 54(10): 54-62 (in Chinese). doi: 10.7652/xjtuxb202010007
[3] PAN Y, MENG Y H, ZHU L. SAR image despeckling method based on improved Frost filtering[J]. Signal, Image and Video Processing, 2021, 15(4): 843-850.
[4] WU T Q, XIAO W, LI R J, et al. Single-molecule localization image background denoising based on time-domain iterative wavelet transform[J]. Chinese Journal of Lasers, 2021, 48(13): 1307001 (in Chinese). doi: 10.3788/CJL202148.1307001
[5] CHU T Y, TAN Y M, LIU Q, et al. Novel fusion method for SAR and optical images based on non-subsampled shearlet transform[J]. International Journal of Remote Sensing, 2020, 41(12): 4590-4604. doi: 10.1080/01431161.2020.1723175
[6] YU Y J, ACTON S T. Speckle reducing anisotropic diffusion[J]. IEEE Transactions on Image Processing, 2002, 11(11): 1260-1270. doi: 10.1109/TIP.2002.804276
[7] AJA-FERNÁNDEZ S, ALBEROLA-LÓPEZ C. On the estimation of the coefficient of variation for anisotropic diffusion speckle filtering[J]. IEEE Transactions on Image Processing, 2006, 15(9): 2694-2701. doi: 10.1109/TIP.2006.877360
[8] CHIERCHIA G, COZZOLINO D, POGGI G, et al. SAR image despeckling through convolutional neural networks[C]//Proceedings of the IEEE International Geoscience and Remote Sensing Symposium. Piscataway: IEEE Press, 2017: 5438-5441.
[9] WANG P Y, ZHANG H, PATEL V M. SAR image despeckling using a convolutional neural network[J]. IEEE Signal Processing Letters, 2017, 24(12): 1763-1767. doi: 10.1109/LSP.2017.2758203
[10] VITALE S, FERRAIOLI G, PASCAZIO V. Multi-objective CNN-based algorithm for SAR despeckling[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(11): 9336-9349. doi: 10.1109/TGRS.2020.3034852
[11] PASSAH A, AMITAB K, KANDAR D. SAR image despeckling using deep CNN[J]. IET Image Processing, 2021, 15(6): 1285-1297. doi: 10.1049/ipr2.12104
[12] PERERA M V, BANDARA W G C, VALANARASU J M J, et al. Transformer-based SAR image despeckling[C]//Proceedings of the IEEE International Geoscience and Remote Sensing Symposium. Piscataway: IEEE Press, 2022: 751-754.
[13] CHENG L, GUO Z C, LI Y, et al. Two-stream multiplicative heavy-tail noise despeckling network with truncation loss[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5213817.
[14] LIN H X, ZHUANG Y H, HUANG Y, et al. Self-supervised SAR despeckling powered by implicit deep denoiser prior[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 4514705.
[15] ZHU L, LI J M, PAN Y, et al. SAR image despeckling algorithm using non-local means with adaptive filtering strength[J]. Journal of Electronics & Information Technology, 2021, 43(5): 1258-1266 (in Chinese). doi: 10.11999/JEIT200099
[16] ZHANG Y L, LI K P, LI K, et al. Image super-resolution using very deep residual channel attention networks[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2018: 294-310.
[17] ZHANG K, LI Y W, LIANG J Y, et al. Practical blind image denoising via Swin-Conv-UNet and data synthesis[EB/OL]. (2022-03-24)[2023-05-31]. https://arxiv.org/abs/2203.13278v4.
[18] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2018: 3-19.
[19] SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2015: 1-9.
[20] LEE J S, HOPPEL K. Noise modeling and estimation of remotely-sensed images[C]//Proceedings of the 12th Canadian Symposium on Remote Sensing Geoscience and Remote Sensing Symposium. Piscataway: IEEE Press, 1989: 1005-1008.
[21] PARRILLI S, PODERICO M, ANGELINO C V, et al. A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage[J]. IEEE Transactions on Geoscience and Remote Sensing, 2012, 50(2): 606-616. doi: 10.1109/TGRS.2011.2161586
-