
Intelligent algorithm of warship’s vital parts detection, trajectory prediction and pose estimation

李晨瑄 李湉雨 李梓正 曾维贵 胥辉旗

Citation (Chinese): 李晨瑄,李湉雨,李梓正,等. 智能化舰船要害检测、轨迹预测与位姿估计算法[J]. 北京航空航天大学学报,2023,49(2):444-456. doi: 10.13700/j.bh.1001-5965.2021.0253
Citation: LI C X, LI T Y, LI Z Z, et al. Intelligent algorithm of warship’s vital parts detection, trajectory prediction and pose estimation[J]. Journal of Beijing University of Aeronautics and Astronautics, 2023, 49(2): 444-456 (in Chinese). doi: 10.13700/j.bh.1001-5965.2021.0253


doi: 10.13700/j.bh.1001-5965.2021.0253

  • Corresponding author: E-mail: lccxmail@163.com

  • CLC number: V243.5; TP751.1


  • Abstract:

    Accurately detecting and striking the vital parts of a warship can effectively improve the damage effectiveness of anti-ship missiles. To address the low detection accuracy of warship vital parts and the insufficient accuracy of guidance error calculation, a deep-learning-based algorithm for warship vital key-point detection, trajectory prediction and seeker pose estimation is proposed. Deep semantic information is fused with shallow localization information, and SoftPool is adopted to retain fine-grained features, improving the detection accuracy of warship vital parts across multiple view angles and scales. A mapping is established between the key-point detection results and the spatial structure of the warship to solve the three-dimensional pose of the seeker. A long short-term memory (LSTM) network is introduced to mine the spatio-temporal features of the vital strike points, realizing dynamic trajectory prediction of warship vital parts at multiple scales. Experimental results show that the proposed algorithm achieves high accuracy in vital-part detection and trajectory prediction, and its seeker pose estimates are reasonably accurate, meeting the situational-awareness requirements of anti-ship missiles performing autonomous penetration in complex naval battlefields.
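    The SoftPool operation mentioned above (see Figure 3 and reference [14]) weights every activation in a pooling window by its exponential, so informative but non-maximal responses survive downsampling. The PyTorch sketch below only illustrates that rule under the assumption of a 2×2 window; the function name soft_pool2d and the tensor sizes are placeholders, not the authors' released implementation.

        import torch
        import torch.nn.functional as F

        def soft_pool2d(x: torch.Tensor, kernel_size: int = 2, stride: int = 2) -> torch.Tensor:
            """Exponentially weighted downsampling: every activation in a pooling
            window contributes in proportion to exp(activation), so strong but
            non-maximal responses are kept rather than discarded as in max pooling.
            (Minimal version; the published implementation adds numerical stabilisation.)"""
            w = torch.exp(x)
            # avg_pool(w * x) / avg_pool(w) equals the softmax-weighted sum of x
            # over each kernel window.
            return F.avg_pool2d(w * x, kernel_size, stride) / F.avg_pool2d(
                w, kernel_size, stride).clamp_min(1e-6)

        # Example: halve the spatial resolution of a feature map
        feat = torch.randn(1, 256, 64, 64)
        print(soft_pool2d(feat).shape)  # torch.Size([1, 256, 32, 32])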

     

  • Figure 1.  Proposed algorithm flow chart

    Figure 2.  Structure of SHKP-LSTM algorithm

    Figure 3.  Principle of SoftPool

    Figure 4.  Structure of LSTM cell

    Figure 5.  Vital key-points of warship

    Figure 6.  Coordinate system of warship

    Figure 7.  Test of hidden layer node number

    Figure 8.  Test of input sequence length

    Figure 9.  Detection results of warship’s key-points

    Figure 10.  Loss function curve

    Figure 11.  Trajectory prediction results

    Figure 12.  Detailed analysis of trajectory prediction

    Table 1.  Experimental environment

    Item                     | Configuration
    CPU                      | AMD Ryzen 9 3900X
    RAM                      | 32 GB
    GPU                      | GeForce RTX 2080 Ti
    GPU memory               | 11 GB
    IDE                      | PyCharm, gedit, vim
    OS                       | Ubuntu 16.04 LTS
    Language                 | Python
    Acceleration environment | CUDA 10.0, cuDNN 7.6
    Deep learning framework  | PyTorch 1.0

    Table 2.  Test results of warship’s key-points

    Algorithm          | mAP/% | Recall/% | Detection speed/FPS | Parameters/10^6 | Model size/10^6
    ResNet[18]         | 63.8  | 67.3     | 33                  | 15.82           | 65.11
    Res-DCN[13]        | 63.4  | 66.1     | 35                  | 14.43           | 58.26
    DLA34              | 81.8  | 86.7     | 26                  | 18.16           | 75.76
    CenterNet-DLA[19]  | 84.4  | 88.0     | 29                  | 20.17           | 80.64
    Hourglass[20]      | 87.4  | 90.9     | 13                  | 191.25          | 779.88
    Proposed algorithm | 87.7  | 91.1     | 27                  | 20.17           | 87.2
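    The detectors compared in Table 2 (ResNet, DLA34, CenterNet-DLA and Hourglass, references [13], [18]-[20]) are all heatmap-based key-point networks. As a hedged illustration of how key-point coordinates are typically read off such heatmaps, and not necessarily the exact decoding used in the paper, a CenterNet-style decoder might look as follows; decode_keypoints, the 4× down-sampling ratio and the five key-point channels are illustrative assumptions.

        import torch
        import torch.nn.functional as F

        def decode_keypoints(heatmaps: torch.Tensor, down_ratio: int = 4):
            """Pick one (x, y) location and confidence per key-point channel from
            heatmaps of shape (B, K, H, W); down_ratio maps feature-map cells back
            to input-image pixels."""
            # 3x3 max-pool NMS: suppress everything that is not a local maximum.
            peaks = F.max_pool2d(heatmaps, kernel_size=3, stride=1, padding=1)
            heatmaps = heatmaps * (peaks == heatmaps).float()

            b, k, h, w = heatmaps.shape
            scores, idx = heatmaps.view(b, k, -1).max(dim=-1)  # best cell per channel
            xs, ys = idx % w, idx // w
            coords = torch.stack([xs, ys], dim=-1) * down_ratio
            return coords, scores

        # Example: five hypothetical key-point channels on a 128x128 output map
        coords, scores = decode_keypoints(torch.rand(1, 5, 128, 128))
        print(coords.shape, scores.shape)  # torch.Size([1, 5, 2]) torch.Size([1, 5])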

    Table 3.  Test results of different pooling methods

    Pooling method          | mAP/% | Detection speed/FPS
    Max pooling             | 84.4  | 29
    Stochastic pooling      | 85.4  | 28
    Spatial pyramid pooling | 85.9  | 29
    SoftPool                | 87.7  | 27

    Table 4.  Test results of pose estimation

    Image     | Ground-truth position/cm | Computed position/cm   | MAE/cm | MSE/cm | Rotation angles/(°)
    Fig. 9(a) | (−80, 70, 23)            | (−82.23, 73.89, 26.40) | 3.89   | 3.25   | −115, −45, −166
    Fig. 9(b) | (82, 83, 40)             | (79.23, 79.86, 39.99)  | 3.14   | 2.417  | −116, 43, 166
    Fig. 9(c) | (42, 103, 40)            | (40.14, 99.62, 35.86)  | 4.14   | 3.267  | −110, 14, 170
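    Table 4 reports the translation and rotation recovered by mapping detected 2D key-points onto the known 3D structure of the ship (Figures 5, 6 and 9). A standard way to realize such a 2D-3D mapping is a perspective-n-point (PnP) solution; the sketch below uses OpenCV's solvePnP only to illustrate the idea, and every coordinate, the camera intrinsics and the point layout are placeholders rather than the paper's actual values or solver.

        import numpy as np
        import cv2

        # Hypothetical 3D coordinates (cm) of six ship key-points in the ship-body
        # frame of Fig. 6, paired with their detected pixel locations in the image.
        object_points = np.array([[0, 0, 0], [700, 0, 0], [350, 0, 60],
                                  [350, 80, 30], [350, -80, 30], [350, 0, 120]],
                                 dtype=np.float64)
        image_points = np.array([[412, 330], [648, 325], [530, 295],
                                 [560, 310], [500, 312], [528, 240]],
                                dtype=np.float64)

        # Placeholder pinhole intrinsics (focal lengths and principal point).
        K = np.array([[800.0, 0.0, 640.0],
                      [0.0, 800.0, 360.0],
                      [0.0, 0.0, 1.0]])

        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
        R, _ = cv2.Rodrigues(rvec)      # rotation of the ship frame w.r.t. the camera
        print(ok, tvec.ravel())         # translation, same units as object_points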

    Table 5.  Comparison of trajectory prediction algorithms

    Algorithm                          | ADE/pixel | FDE/pixel
    Key-point detection (ground truth) | 0.3032    | 1.4506
    Kalman filter                      | 1.1398    | 2.6712
    ARIMA                              | 0.9635    | 2.0456
    LSTM                               | 0.3263    | 1.6325
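    Table 5 evaluates trajectory prediction with the average and final displacement errors (ADE/FDE, in pixels). The sketch below shows a minimal LSTM regressor over past key-point positions together with an ADE/FDE computation; the class name, hidden size, window length and roll-out depth are illustrative assumptions and do not reproduce the paper's SHKP-LSTM configuration (whose hidden-node and sequence-length choices are studied in Figures 7 and 8).

        import torch
        import torch.nn as nn

        class KeyPointLSTM(nn.Module):
            """Minimal LSTM regressor: a window of past (x, y) key-point positions
            goes in, the next position comes out."""
            def __init__(self, hidden: int = 64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 2)

            def forward(self, seq):              # seq: (batch, T, 2)
                out, _ = self.lstm(seq)
                return self.head(out[:, -1])     # predicted next (x, y)

        def ade_fde(pred, gt):
            """Average / final displacement error (pixels) over predicted tracks."""
            dist = torch.linalg.norm(pred - gt, dim=-1)      # (batch, T)
            return dist.mean().item(), dist[:, -1].mean().item()

        # Example: 8 observed frames in, a 4-step trajectory rolled out one step
        # at a time by feeding each prediction back into the history window.
        model = KeyPointLSTM()
        history = torch.randn(1, 8, 2)
        preds = []
        for _ in range(4):
            nxt = model(history)
            preds.append(nxt)
            history = torch.cat([history[:, 1:], nxt.unsqueeze(1)], dim=1)
        pred_track = torch.stack(preds, dim=1)               # (1, 4, 2)
        print(ade_fde(pred_track, torch.randn(1, 4, 2)))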
  • [1] YU R X, WU Y L, CAO M, et al. Target extraction and image matching algorithm based on combination of edge and corner[J]. Journal of Northwestern Polytechnical University, 2017, 35(4): 586-590 (in Chinese). doi: 10.3969/j.issn.1000-2758.2017.04.005
    [2] SU J, YANG L, HUANG H, et al. Improved SSD algorithm for small-sized SAR ship detection[J]. Systems Engineering and Electronics, 2020, 42(5): 1026-1034 (in Chinese). doi: 10.3969/j.issn.1001-506X.2020.05.08
    [3] GAO F, HE Y S, WANG J, et al. Anchor-free convolutional network with dense attention feature aggregation for ship detection in SAR images[J]. Remote Sensing, 2020, 12(16): 2619. doi: 10.3390/rs12162619
    [4] TANG G, LIU S B, FUJINO I, et al. H-YOLO: A single-shot ship detection approach based on region of interest preselected network[J]. Remote Sensing, 2020, 12(24): 4192. doi: 10.3390/rs12244192
    [5] WANG X K, JIANG H X, LIN K Y. Remote sensing image ship detection based on modified YOLO algorithm[J]. Journal of Beijing University of Aeronautics and Astronautics, 2020, 46(6): 1184-1191 (in Chinese). doi: 10.13700/j.bh.1001-5965.2019.0394
    [6] ZHANG Y L, GUO L H, WANG Z F, et al. Intelligent ship detection in remote sensing images based on multi-layer convolutional feature fusion[J]. Remote Sensing, 2020, 12(20): 3316. doi: 10.3390/rs12203316
    [7] LIU Y, LI J, ZHANG J L, et al. Research progress of two-dimensional human pose estimation based on deep learning[J]. Computer Engineering, 2021, 47(3): 1-16 (in Chinese). doi: 10.19678/j.issn.1000-3428.0058799
    [8] TOSHEV A, SZEGEDY C. DeepPose: Human pose estimation via deep neural networks[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2014: 1653-1660.
    [9] TOMPSON J, JAIN A, LECUN Y, et al. Joint training of a convolutional network and a graphical model for human pose estimation[C]//Proceedings of the Conference on Neural Information Processing Systems. New York: Curran Associates Press, 2014: 1799-1807.
    [10] WEI S H, RAMAKRISHNA V, KANADE T, et al. Convolutional pose machines[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2016: 4724-4732.
    [11] CHEN Z H, PEI H Y, WANG J K, et al. Survey of monocular camera-based visual relocalization[J]. Robot, 2021, 43(3): 373-384 (in Chinese). doi: 10.13973/j.cnki.robot.200350
    [12] ALAHI A, GOEL K, RAMANATHAN V, et al. Social LSTM: Human trajectory prediction in crowded spaces[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2016: 961-971.
    [13] ZHOU X Y, WANG D Q, KRÄHENBÜHL P. Objects as points[EB/OL]. (2019-04-16)[2021-04-26]. https://arxiv.org/abs/1904.07850.
    [14] STERGIOU A, POPPE R, KALLIATAKIS G. Refining activation downsampling with SoftPool[C]//IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE Press, 2021: 10337-10346.
    [15] ZENG W L, QUAN Z B, ZHAO Z Y, et al. A deep learning approach for aircraft trajectory prediction in terminal airspace[J]. IEEE Access, 2020, 8: 151250-151266. doi: 10.1109/ACCESS.2020.3016289
    [16] HUANG J, JIANG Z G, ZHANG H P, et al. Ship object detection in remote sensing images using convolutional neural networks[J]. Journal of Beijing University of Aeronautics and Astronautics, 2017, 43(9): 1841-1848 (in Chinese). doi: 10.13700/j.bh.1001-5965.2016.0755
    [17] ZHANG X S, ZHUANG Y, YAN F, et al. Status and development of transfer learning based category-level object recognition and detection[J]. Acta Automatica Sinica, 2019, 45(7): 1224-1243 (in Chinese). doi: 10.16383/j.aas.c180093
    [18] XIAO B, WU H P, WEI Y C. Simple baselines for human pose estimation and tracking[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2018: 472-487.
    [19] YU F, WANG D Q, SHELHAMER E, et al. Deep layer aggregation[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2018: 2403-2412.
    [20] NEWELL A, YANG K Y, DENG J. Stacked hourglass networks for human pose estimation[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2016: 483-499.
    [21] WANG X, YANG R N, ZUO J L, et al. Trajectory prediction of target aircraft based on HPSO-TPFENN neural network[J]. Journal of Northwestern Polytechnical University, 2019, 37(3): 612-620 (in Chinese). doi: 10.3969/j.issn.1000-2758.2019.03.025
Publication history
  • Received: 2021-05-14
  • Accepted: 2021-06-25
  • Published online: 2021-07-05
  • Issue published: 2023-02-28
