
Martian terrain feature extraction method based on unsupervised contrastive learning

YANG Bo, WEI Xiang, YU He, LIU Chaofan

Citation: YANG B, WEI X, YU H, et al. Martian terrain feature extraction method based on unsupervised contrastive learning[J]. Journal of Beijing University of Aeronautics and Astronautics, 2024, 50(6): 1842-1849 (in Chinese). doi: 10.13700/j.bh.1001-5965.2022.0525

doi: 10.13700/j.bh.1001-5965.2022.0525
Details
    Corresponding author: E-mail: yangbo@buaa.edu.cn
  • CLC number: V448

Martian terrain feature extraction method based on unsupervised contrastive learning

More Information
  • Abstract:

    Intelligent recognition of Martian surface terrain is important for autonomous exploration by Mars rovers. Existing feature extraction methods for Martian terrain images fall into two categories: traditional shallow visual feature extraction and deep feature extraction based on supervised learning. The key problems to be solved are recovering lost image information and obtaining large amounts of labeled data. To this end, a Martian terrain feature recognition method based on unsupervised contrastive learning is proposed. An image dictionary dataset is built, and two neural networks, a "query" network and an "encoding" (key) network, compare each image against the other images in the dictionary dataset; a similarity functional serves as the loss function for training the networks, thereby achieving feature recognition of Martian terrain images. The proposed method can also recognize new terrain types that are absent from the training dataset, which gives it a clear advantage in subsequent recognition and classification. Simulation results show that the recognition accuracy of the proposed method is 85.4%, and its accuracy on new terrain types is 84.5%.
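The dictionary-plus-two-encoder scheme described in the abstract follows the general pattern of momentum-contrast training: a query encoder and a key encoder score each image against a dictionary of encoded keys, and a similarity-based (InfoNCE-style) loss is minimized. A minimal numerical sketch of that loss, with toy linear "encoders" and synthetic data rather than the paper's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy stand-in for a CNN encoder: linear map followed by L2 normalization."""
    z = x @ W
    return z / np.linalg.norm(z)

def info_nce_loss(q, k_pos, queue, tau=0.07):
    """Contrastive similarity loss: the query should match its positive key
    and mismatch every negative key stored in the dictionary queue."""
    logits = np.concatenate(([q @ k_pos], queue @ q)) / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # positive key sits at index 0

# Two augmented "views" of the same (synthetic) terrain patch.
x = rng.normal(size=8)
view_q = x + 0.01 * rng.normal(size=8)
view_k = x + 0.01 * rng.normal(size=8)

W_query = rng.normal(size=(8, 8))   # "query" network weights
W_key = W_query.copy()              # "key" network: momentum copy of the query net

q = encode(view_q, W_query)
k_pos = encode(view_k, W_key)
# Dictionary queue of previously encoded (negative) keys.
queue = np.stack([encode(rng.normal(size=8), W_key) for _ in range(16)])

loss = info_nce_loss(q, k_pos, queue)
```

Because the two views come from the same patch, their embeddings nearly coincide and the loss is small; training drives this behavior for all positive pairs while pushing apart the dictionary negatives.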

     

  • Figure 1.  Entropy distribution of typical Martian terrain images

    Figure 2.  Terrain image segmentation

    Figure 3.  Schematic diagram of augmentation processing

    Figure 4.  Implementation scheme of the contrastive learning algorithm

    Figure 5.  Design of simulation verification

    Figure 6.  Feature output images of the neural network's convolutional layers

    Figure 7.  Confusion matrix of classification and recognition

    Figure 8.  Classification and recognition accuracy under different parameters

    Figure 9.  Terrain image recognition accuracy under different methods

    Figure 10.  Martian vista and various terrain images

    Figure 11.  Classification confusion matrix with a new class of terrain image

    Figure 12.  Influence of output feature dimension on recognition accuracy

    Table 1.  Examples of five types of Martian terrain

    Category       Typical image   Impact
    Sand ripples   (image)         Causes wheels to sink; rover stuck in sand pits
    Sharp rocks    (image)         Causes wheel damage
    Fine sand      (image)         Small grain size; low wheel slip ratio
    Coarse sand    (image)         Large grain size; high wheel slip ratio
    Bedrock        (image)         Hard; traversable but easily damages wheels

    Table 2.  Entropy of typical Martian terrain images

    Category       Mean     Variance
    Sand ripples   6.3211   0.4777
    Sharp rocks    6.6011   0.2715
    Fine sand      7.1354   0.0088
    Coarse sand    7.1962   0.0661
    Bedrock        6.4962   0.1772
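The entropy statistic tabulated above is presumably the Shannon entropy of each image's grayscale histogram, a standard texture measure. A sketch of that computation on synthetic patches (not the paper's actual Mars imagery):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (in bits) of an 8-bit grayscale image's histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # by convention, 0 * log 0 = 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
flat = np.full((64, 64), 128, dtype=np.uint8)                 # featureless patch
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # high-texture patch
```

A constant patch has zero entropy, while a patch with near-uniform gray levels approaches the 8-bit maximum; the 6.3-7.2 bit range in Table 2 sits between these extremes, with fine and coarse sand (richest texture) highest.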
Publication history
  • Received: 2022-06-22
  • Accepted: 2022-09-02
  • Published online: 2022-09-22
  • Issue published: 2024-06-27
