
Remote sensing target detection based on dynamic feature selection

CHEN Chao, ZHAO Wei

陈超,赵巍. 基于动态特征选择的遥感图像目标检测算法[J]. 北京航空航天大学学报,2023,49(3):702-709. doi: 10.13700/j.bh.1001-5965.2021.0300
CHEN C, ZHAO W. Remote sensing target detection based on dynamic feature selection[J]. Journal of Beijing University of Aeronautics and Astronautics, 2023, 49(3): 702-709 (in Chinese). doi: 10.13700/j.bh.1001-5965.2021.0300


doi: 10.13700/j.bh.1001-5965.2021.0300
Details
    Author biographies:

    CHEN Chao: male, M.S. candidate. Research interests: computer vision and recognition.

    ZHAO Wei: female, Ph.D., professor, doctoral supervisor. Research interests: image processing, computer vision and recognition.

    Corresponding author:

    E-mail: zhao29wei@263.net

  • CLC number: TP391.4

Remote sensing target detection based on dynamic feature selection

More Information
  • Abstract:

    Detecting rotated objects in remote sensing imagery is challenging: convolutional neural networks extract information through a fixed spatial structure, so their sampling points cannot focus on the target. Moreover, remote sensing images vary greatly in scale, so different objects require feature maps with receptive fields of different scales, and a feature map with a single receptive-field scale cannot capture all the effective information. To address this, a deformable alignment convolution is proposed, which adjusts the sampling points according to the candidate bounding box and learns fine offsets for them from the feature map, so that the sampling points focus on the target and dynamic feature selection is realized. A receptive field adaptive module based on deformable alignment convolution is also proposed, which fuses feature maps with receptive fields of different scales and adaptively adjusts the receptive fields of neurons. Extensive experiments on public datasets verify that the proposed algorithm improves the accuracy of target detection in remote sensing images.
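As context for the receptive field adaptive module described in the abstract, the sketch below shows one common way to fuse feature maps from branches with different receptive-field scales: softmax attention weights over the branches followed by a weighted sum. This is a minimal pure-Python illustration under stated assumptions; the function name, branch count, and per-branch scalar descriptors are hypothetical, not the paper's implementation.

```python
import math

def adaptive_fusion(branches, scores):
    """Fuse per-branch feature vectors with softmax attention weights.

    `branches` is a list of equal-length feature vectors, one per
    receptive-field scale; `scores` is one learned scalar per branch
    (hypothetical stand-ins for the module's learned descriptors).
    """
    # Softmax over branch scores -> one weight per receptive-field scale.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum across branches gives the fused feature vector,
    # so each output neuron blends several receptive-field scales.
    fused = [sum(w * b[i] for w, b in zip(weights, branches))
             for i in range(len(branches[0]))]
    return weights, fused

# Example: three branches (e.g., 3x3, 5x5, 7x7 receptive fields).
weights, fused = adaptive_fusion(
    branches=[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    scores=[0.1, 0.2, 0.3],
)
```

Because the weights sum to one, each fused value stays within the range spanned by the branches, and a branch with a larger score contributes proportionally more.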

     

  • Figure 1.  Overall structure of DFSNet

    Figure 2.  Overall structure of RetinaNet

    Figure 3.  Two output bounding boxes

    Figure 4.  Comparison of sampling locations of different convolutions with kernel size of 3×3

    Figure 5.  Ship's sampling locations after different convolutions

    Figure 6.  Dynamic feature selection layer

    Figure 7.  Receptive field adaptive module

    Figure 8.  Explanation of IoU

    Figure 9.  Visual results on DOTA
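Figure 8 above explains IoU (intersection over union), the overlap criterion used to decide whether a predicted box matches a ground-truth box. As a reference point, here is a minimal sketch for axis-aligned boxes; note that the paper evaluates rotated boxes, whose intersection is a general polygon and is more involved to compute.

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when the boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    # Union = sum of the two areas minus the double-counted overlap.
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Two unit squares overlapping by half: intersection 0.5, union 1.5.
half_overlap = iou((0.0, 0.0, 1.0, 1.0), (0.5, 0.0, 1.5, 1.0))
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold (0.5 is the conventional choice for mAP evaluation).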

    Table 1.  Comparison between deformable alignment convolution and other convolutions

    Method                              mAP/%
    Standard convolution                71.17
    Deformable convolution              71.68
    Alignment convolution               72.45
    Deformable alignment convolution    73.18

    Table 2.  Ablation studies of DFSNet
    (Module columns: RoI transformation module, dynamic feature selection layer, receptive field adaptive module)

    Method                        mAP/%
    Baseline                      68.05
    DFSNet, different settings    71.17
                                  73.18
                                  74.04

    Table 3.  Comparison of DFSNet and other methods on DOTA

    AP/% on PL, BD, BR, GTF, SV, with mAP/% over these five categories:

    Stage      Model                 PL     BD     BR     GTF    SV     mAP/%
    Two-stage  FR-O[2]               79.42  77.13  17.70  64.05  35.30  54.72
               RoI Transformer[8]    88.64  78.52  43.44  75.92  68.81  71.10
               CAD-Net[20]           87.80  82.40  49.40  73.50  71.10  72.84
               SCRDet[21]            89.98  80.65  52.09  68.36  68.36  71.89
    One-stage  RetinaNet             88.82  81.74  44.44  65.72  67.11  69.57
               DRN[16]               88.91  80.22  43.52  63.35  73.48  69.90
               R3Det[22]             89.54  81.99  48.46  62.52  70.48  70.60
               DFSNet                89.12  77.40  52.05  73.47  78.02  74.01

    AP/% on LV, SH, TC, BC, ST, with mAP/% over these five categories:

    Stage      Model                 LV     SH     TC     BC     ST     mAP/%
    Two-stage  FR-O[2]               38.02  37.16  89.41  69.64  59.28  58.70
               RoI Transformer[8]    73.68  83.59  90.74  77.27  81.46  81.35
               CAD-Net[20]           63.50  76.60  90.90  79.20  73.30  76.70
               SCRDet[21]            60.32  72.41  90.85  87.94  86.86  79.68
    One-stage  RetinaNet             55.82  72.77  90.55  82.83  76.30  75.65
               DRN[16]               70.69  89.94  90.14  83.85  84.11  83.75
               R3Det[22]             74.29  77.54  90.80  81.39  83.54  81.51
               DFSNet                79.31  87.35  90.90  85.13  84.90  85.52

    AP/% on SBF, RA, HA, SP, HC, with mAP/% over these five categories:

    Stage      Model                 SBF    RA     HA     SP     HC     mAP/%
    Two-stage  FR-O[2]               50.30  52.91  47.89  47.40  46.30  48.96
               RoI Transformer[8]    58.39  53.54  62.83  58.93  47.67  56.27
               CAD-Net[20]           48.40  60.90  62.00  67.00  62.20  60.10
               SCRDet[21]            65.02  66.68  66.25  68.24  65.21  66.28
    One-stage  RetinaNet             54.19  63.64  63.71  69.73  53.37  60.93
               DRN[16]               50.12  58.41  67.62  68.60  52.50  59.45
               R3Det[22]             61.97  59.82  65.44  67.46  60.05  62.95
               DFSNet                60.90  63.83  67.31  67.56  53.35  62.59
  • [1] 王彦情, 马雷, 田原. 光学遥感图像舰船目标检测与识别综述[J]. 自动化学报, 2011, 37(9): 1029-1039.

    WANG Y Q, MA L, TIAN Y. State-of-the-art of ship detection and recognition in optical remotely sensed imagery[J]. Acta Automatica Sinica, 2011, 37(9): 1029-1039 (in Chinese).
    [2] XIA G S, BAI X, DING J, et al. DOTA: A large-scale dataset for object detection in aerial images[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2018: 3974-3983.
    [3] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic seg-mentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2014: 580-587.
    [4] GIRSHICK R. Fast R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2015: 1440-1448.
    [5] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. doi: 10.1109/TPAMI.2016.2577031
    [6] HE K M, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2017: 2961-2969.
    [7] DAI J F, QI H Z, XIONG Y W, et al. Deformable convolutional networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2017: 764-773.
    [8] DING J, XUE N, LONG Y, et al. Learning RoI Transformer for oriented object detection in aerial images[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2019: 2849-2858.
    [9] LIU Z K, HU J G, WENG L B, et al. Rotated region based CNN for ship detection[C]//2017 IEEE International Conference on Image Processing(ICIP). Piscataway: IEEE Press, 2017: 900-904.
    [10] MA J Q, SHAO W Y, YE H, et al. Arbitrary-oriented scene text detection via rotation proposals[J]. IEEE Transactions on Multimedia, 2018, 20(11): 3111-3122. doi: 10.1109/TMM.2018.2818020
    [11] HAN J M, DING J, LI J, et al. Align deep features for oriented object detection[EB/OL]. (2021-07-12)[2021-07-12]. https://arxiv.org/abs/2008.09397.
    [12] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2017: 2117-2125.
    [13] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2017: 2980-2988.
    [14] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2016: 770-778.
    [15] LI Y H, CHEN Y T, WANG N Y, et al. Scale-aware trident networks for object detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE Press, 2019: 6054-6063.
    [16] PAN X J, REN Y Q, SHENG K K, et al. Dynamic refinement network for oriented and densely packed object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2020: 11207-11216.
    [17] IOFFE S, SZEGEDY C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]//International Conference on Machine Learning, 2015: 448-456.
    [18] TAN M X, PANG R M, LE Q V. EfficientDet: Scalable and efficient object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2020: 10781-10790.
    [19] PASZKE A, GROSS S, MASSA F, et al. PyTorch: An imperative style, high-performance deep learning library[EB/OL]. (2019-12-03)[2021-06-01]. https://arxiv.org/abs/1912.01703.
    [20] ZHANG G J, LU S J, ZHANG W. CAD-Net: A context-aware detection network for objects in remote sensing imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(12): 10015-10024. doi: 10.1109/TGRS.2019.2930982
    [21] YANG X, YANG J R, YAN J C, et al. SCRDet: Towards more robust detection for small, cluttered and rotated objects[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE Press, 2019: 8232-8241.
    [22] YANG X, LIU Q Q, YAN J C, et al. R3Det: Refined single-stage detector with feature refinement for rotating object[EB/OL]. (2020-12-08)[2021-06-01]. https://arxiv.org/abs/1908.05612v1.
Figures(9) / Tables(3)
Metrics
  • Article views: 386
  • HTML full-text views: 93
  • PDF downloads: 52
  • Citations: 0
Publication history
  • Received: 2021-06-04
  • Accepted: 2021-08-13
  • Published online: 2021-09-09
  • Issue published: 2023-03-30
