Volume 47, Issue 8, Aug. 2021
Citation: BAO Zhuangzhuang, ZHAO Xuejun. Ship detector in SAR images based on EfficientDet without pre-training[J]. Journal of Beijing University of Aeronautics and Astronautics, 2021, 47(8): 1664-1672. doi: 10.13700/j.bh.1001-5965.2020.0255 (in Chinese)

Ship detector in SAR images based on EfficientDet without pre-training

doi: 10.13700/j.bh.1001-5965.2020.0255
More Information
  • Corresponding author: ZHAO Xuejun. E-mail: 292457155@qq.com
  • Received Date: 11 Jun 2020
  • Accepted Date: 04 Sep 2020
  • Publish Date: 20 Aug 2021
  • Abstract: Aiming at the problem of multi-scale, multi-scene ship detection in Synthetic Aperture Radar (SAR) images, an object detector without pre-training, SED, based on EfficientDet is proposed. Existing convolutional-neural-network ship detectors for SAR images do not reach the performance they should. One important reason is that they depend on models pre-trained on classification tasks, with no effective way to bridge the gap between SAR images and natural-scene images. Another is that the information in each layer of the convolutional network is not fully exploited: the feature fusion is not strong enough to handle ship detection across multiple scenes, including open sea and offshore areas, and in particular cannot suppress the interference of complex offshore backgrounds. SED improves on both aspects, and experiments are conducted on a public SAR ship detection dataset. SED reaches a detection accuracy (AP) of 94.2%, exceeding the best classical deep-learning detector, RetinaNet, by 1.3%, while striking a balance among model size, computation cost, and detection speed. This verifies that the model achieves excellent performance for multi-scale SAR ship detection in multiple scenes.
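For context on the feature-fusion point above: EfficientDet aggregates multi-scale features with a bi-directional feature pyramid network (BiFPN), where each fused map is a weighted sum of its inputs with non-negative, normalized learnable weights ("fast normalized fusion"). The short NumPy sketch below illustrates only that fusion step; the function name, toy weights, and tensor shapes are illustrative assumptions, not code from the paper's SED implementation.

    import numpy as np

    def fast_normalized_fusion(features, weights, eps=1e-4):
        # BiFPN-style fusion: clamp the learnable weights to be non-negative,
        # normalize them so they sum to roughly 1 (cheaper than a softmax),
        # then take the weighted sum of same-shaped feature maps.
        w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)
        w = w / (w.sum() + eps)
        return sum(wi * f for wi, f in zip(w, features))

    # Toy usage: fuse a P4 map with a P5 map already upsampled to P4's resolution.
    p4 = np.random.rand(1, 64, 32, 32)     # feature map at pyramid level P4
    p5_up = np.random.rand(1, 64, 32, 32)  # P5 feature upsampled to match P4
    fused = fast_normalized_fusion([p4, p5_up], [0.7, 0.3])
    print(fused.shape)                     # -> (1, 64, 32, 32)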

     

References

[1] KANJIR U, GREIDANUS H, KRIŠTOF O. Vessel detection and classification from spaceborne optical images: A literature survey[J]. Remote Sensing of Environment, 2018, 207: 1-26.
[2] WANG Y, WANG C, ZHANG H, et al. Automatic ship detection based on RetinaNet using multi-resolution Gaofen-3 imagery[J]. Remote Sensing, 2019, 11(5): 531. doi: 10.3390/rs11050531
[3] EL-DARYMLI K, GILL E W, MCGUIRE P, et al. Automatic target recognition in synthetic aperture radar imagery: A state-of-the-art review[J]. IEEE Access, 2016, 4: 6014-6058. doi: 10.1109/ACCESS.2016.2611492
[4] YANG C S, PARK J H, RASHID A. An improved method of land masking for synthetic aperture radar-based ship detection[J]. Journal of Navigation, 2018, 71(4): 788-804. doi: 10.1017/S037346331800005X
[5] MOLINA D E, GLEICH D, DATCU M, et al. Gibbs random field models for model-based despeckling of SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2009, 7(1): 73-77.
[6] QIN X X, ZHOU S L, ZOU H X, et al. A CFAR detection algorithm for generalized gamma distributed background in high-resolution SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2013, 10(4): 806-810. doi: 10.1109/LGRS.2012.2224317
[7] ZHAO J, ZHANG Z, YU W, et al. A cascade coupled convolutional neural network guided visual attention method for ship detection from SAR images[J]. IEEE Access, 2018, 6: 50693-50708. doi: 10.1109/ACCESS.2018.2869289
[8] LI J, QU C, SHAO J. Ship detection in SAR images based on an improved Faster R-CNN[C]//2017 SAR in Big Data Era: Models, Methods and Applications. Piscataway: IEEE Press, 2017: 1-6.
[9] WANG Y, WANG C, ZHANG H, et al. A SAR dataset of ship detection for deep learning under complex backgrounds[J]. Remote Sensing, 2019, 11(7): 765. doi: 10.3390/rs11070765
[10] LIU W, ANGUELOV D, ERHAN D, et al. SSD: Single shot MultiBox detector[C]//European Conference on Computer Vision. Berlin: Springer, 2016: 21-37.
[11] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[C]//Advances in Neural Information Processing Systems, 2015: 91-99.
[12] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]//2017 IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2017: 2999-3007.
[13] NOGUEIRA K, PENATTI O A, DOS SANTOS J A. Towards better exploiting convolutional neural networks for remote sensing scene classification[J]. Pattern Recognition, 2017, 61: 539-556. doi: 10.1016/j.patcog.2016.07.001
[14] SHEN Z, LIU Z, LI J, et al. DSOD: Learning deeply supervised object detectors from scratch[C]//2017 IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2017: 1919-1927.
[15] ZHU R, ZHANG S, WANG X, et al. ScratchDet: Training single-shot object detectors from scratch[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2019: 2268-2277.
[16] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2017: 2117-2125.
[17] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2016: 770-778.
[18] TAN M, PANG R, LE Q V. EfficientDet: Scalable and efficient object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2020: 10781-10790.
[19] GIRSHICK R. Fast R-CNN[C]//2015 IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2015: 1440-1448.
[20] HE K, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]//2017 IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2017: 2961-2969.
[21] REDMON J, FARHADI A. YOLOv3: An incremental improvement[EB/OL]. [2020-05-13]. https://arxiv.org/abs/1804.02767.
[22] TIAN Z, SHEN C, CHEN H, et al. FCOS: Fully convolutional one-stage object detection[C]//2019 IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2019: 9627-9636.
[23] LIU S, QI L, QIN H, et al. Path aggregation network for instance segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2018: 8759-8768.
[24] GHIASI G, LIN T Y, LE Q V. NAS-FPN: Learning scalable feature pyramid architecture for object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2019: 7036-7045.
[25] IOFFE S, SZEGEDY C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]//Proceedings of the 32nd International Conference on Machine Learning. New York: ACM Press, 2015: 448-456.
[26] WU Y, HE K. Group normalization[C]//European Conference on Computer Vision. Berlin: Springer, 2018: 3-19.
[27] HOWARD A G, ZHU M, CHEN B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[EB/OL]. [2020-05-13]. https://arxiv.org/abs/1704.04861.
[28] SANDLER M, HOWARD A, ZHU M, et al. MobileNetV2: Inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2018: 4510-4520.
[29] CHEN X, FANG H, LIN T, et al. Microsoft COCO captions: Data collection and evaluation server[EB/OL]. (2015-04-03)[2020-05-13]. https://arxiv.org/abs/1504.00325.
[30] KINGMA D P, BA J. Adam: A method for stochastic optimization[EB/OL]. (2017-01-30)[2020-05-13]. https://arxiv.org/abs/1412.6980v9.