Volume 50 Issue 2
Feb. 2024
Citation: WANG P, HAO W L, NI C, et al. An overview of visual SLAM methods[J]. Journal of Beijing University of Aeronautics and Astronautics, 2024, 50(2): 359-367 (in Chinese). doi: 10.13700/j.bh.1001-5965.2022.0376

An overview of visual SLAM methods

doi: 10.13700/j.bh.1001-5965.2022.0376
Funds: China Postdoctoral Science Foundation (2021M702030); Science and Technology Program of Transportation Department of Shandong Province (2021B120)
  • Corresponding author: E-mail: emilync@126.com
  • Received Date: 18 May 2022
  • Accepted Date: 23 Jun 2022
  • Available Online: 31 Oct 2022
  • Publish Date: 10 Oct 2022
  • Abstract: Simultaneous localization and mapping (SLAM) enables a mobile robot equipped with specific sensors to estimate its own position and pose while independently building a model of the environment during movement, without any prior knowledge of that environment. It can greatly improve the autonomous navigation ability of mobile robots and their adaptability to different application environments, and it supports the subsequent implementation of dynamic path planning, real-time obstacle avoidance, and multi-robot collaboration. Visual SLAM uses a camera as the external sensor to collect environmental information, building a map and estimating the robot’s own position in real time. In addition to introducing the standard classical visual SLAM methods and visual SLAM methods combined with deep learning, this study describes and examines feature detection approaches, back-end optimization, loop closure detection, and the application of visual SLAM in dynamic environments. Finally, the study summarizes the open issues of visual SLAM and discusses the current state of research and promising directions for future development.
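
    As a rough illustration of the feature-based front end that the overview discusses (feature detection and frame-to-frame matching, as in ORB-SLAM-style pipelines), the sketch below uses OpenCV's ORB detector and a brute-force Hamming matcher in Python. It is a minimal sketch, not the paper's method: the frame file names and parameter values are placeholders chosen for illustration.

    # Minimal sketch of one feature-based visual SLAM front-end step:
    # detect ORB keypoints in two consecutive frames and match them.
    # Assumes OpenCV (cv2) is installed; the frame paths are hypothetical.
    import cv2

    # Load two consecutive grayscale frames (placeholder file names).
    frame1 = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
    frame2 = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

    # ORB combines the FAST corner detector with the BRIEF descriptor.
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)

    # Match binary descriptors by Hamming distance and sort by quality;
    # in a full pipeline these correspondences would feed pose estimation
    # (e.g., essential-matrix computation) and back-end optimization.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} matches between consecutive frames")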

     

  • [1]
    权美香, 朴松昊, 李国. 视觉SLAM综述[J]. 智能系统学报, 2016, 11(6): 768-776.

    QUAN M X, PIAO S H, LI G. An overview of visual SLAM[J]. CAAI Transactions on Intelligent Systems, 2016, 11(6): 768-776(in Chinese).
    [2]
    DAVISON A J, REID I D, MOLTON N D, et al. MonoSLAM: Real-time single camera SLAM[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6): 1052-1067. doi: 10.1109/TPAMI.2007.1049
    [3]
    KLEIN G, MURRAY D. Parallel tracking and mapping for small AR workspaces[C]//Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. Piscataway: IEEE Press, 2007: 225-234.
    [4]
    ROSTEN E. Machine learning for very high-speed corner detection[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2006.
    [5]
    NEWCOMBE R A, LOVEGROVE S J, DAVISON A J. DTAM: Dense tracking and mapping in real-time[C]//Proceedings of the International Conference on Computer Vision. Piscataway: IEEE Press, 2011: 2320-2327.
    [6]
    ENGEL J, SCHÖPS T, CREMERS D. LSD-SLAM: Large-scale direct monocular SLAM[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2014: 834-849.
    [7]
    FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: Fast semi-direct monocular visual odometry[C]//Proceedings of the IEEE International Conference on Robotics and Automation. Piscataway: IEEE Press, 2014: 15-22.
    [8]
    MUR-ARTAL R, MONTIEL J M M, TARDOS J D. ORB-SLAM: A versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163. doi: 10.1109/TRO.2015.2463671
    [9]
    MUR-ARTAL R, TARDÓS J D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262. doi: 10.1109/TRO.2017.2705103
    [10]
    CAMPOS C, ELVIRA R, RODRÍGUEZ J J G, et al. ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890. doi: 10.1109/TRO.2021.3075644
    [11]
    RUBLEE E, RABAUD V, KONOLIGE K, et al. ORB: An efficient alternative to SIFT or SURF[C]//Proceedings of the International Conference on Computer Vision. Piscataway: IEEE Press, 2011: 2564-2571.
    [12]
    ŞIMŞEK B, SATIR S, BILGE H Ş. Performance comparison of direct and feature based vSLAM algorithms[C]//Proceedings of the 29th Signal Processing and Communications Applications Conference. Piscataway: IEEE Press, 2021: 1-4.
    [13]
    ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3): 611-625.
    [14]
    LI R, WANG S, LONG Z, et al. UnDeepVO: Monocular visual odometry through unsupervised deep learning[C]//Proceedings of the IEEE International Conference on Robotics and Automation. Piscataway: IEEE Press, 2018: 7286-7291.
    [15]
    BESCOS B, FÁCIL J M, CIVERA J, et al. DynaSLAM: Tracking, mapping, and inpainting in dynamic scenes[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 4076-4083. doi: 10.1109/LRA.2018.2860039
    [16]
    KANG S, GAO Y, LI K, et al. A visual SLAM algorithm based on dynamic feature point filtering[C]//Proceedings of the IEEE International Conference on Robotics and Biomimetics. Piscataway: IEEE Press, 2021: 1968-1973.
    [17]
    WANG J, RÜNZ M, AGAPITO L. DSP-SLAM: Object oriented SLAM with deep shape priors[C]//Proceedings of the International Conference on 3D Vision. Piscataway: IEEE Presss, 2021: 1362-1371.
    [18]
    SHI J. Good features to track[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 1994: 593-600.
    [19]
    LOWE D G. Object recognition from local scale-invariant features[C]//Proceedings of the International Conference on Computer Vision. Piscataway: IEEE Press, 1999, 2: 1150-1157.
    [20]
    BAY H, TUYTELAARS T, GOOL L V. SURF: Speeded up robust features[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2006: 404-417.
    [21]
    SUN C Z, ZHANG B, WANG J K, et al. A review of visual SLAM based on unmanned systems[C]//Proceedings of the 2nd International Conference on Artificial Intelligence and Education. Piscataway: IEEE Press, 2021: 226-234.
    [22]
    JIAN M, WANG J, YU H, et al. Visual saliency detection by integrating spatial position prior of object with background cues[J]. Expert Systems with Applications, 2021, 168: 114219. doi: 10.1016/j.eswa.2020.114219
    [23]
    汤一明, 刘玉菲, 黄鸿. 视觉单目标跟踪算法综述[J]. 测控技术, 2020, 39(8): 21-34. doi: 10.19708/j.ckjs.2020.08.003

    TANG Y M, LIU Y F, HUANG H. Overview of visual single target tracking algorithm[J]. Measurement and Control Technology, 2020, 39(8): 21-34(in Chinese). doi: 10.19708/j.ckjs.2020.08.003
    [24]
    KONDA K R, MEMISEVIC R. Learning visual odometry with a convolutional network[C]//Proceedings of the International Conference on Computer Vision Theory and Application. Setubal: Science and Technology Publications, 2015: 486-490.
    [25]
    HOU Y, ZHANG H, ZHOU S. Convolutional neural network-based image representation for visual loop closure detection[C]//Proceedings of the IEEE International Conference on Information and Automation. Piscataway: IEEE Press, 2015: 2238-2245.
    [26]
    ZHANG X, SU Y, ZHU X. Loop closure detection for visual SLAM systems using convolutional neural network[C]//Proceedings of the 23rd International Conference on Automation and Computing. Piscataway: IEEE Press, 2017: 1-6.
    [27]
    QIN T, CAO S, PAN J, et al. A general optimization-based framework for global pose estimation with multiple sensors[EB/OL]. (2019-01-11)[2022-05-01].https://arxiv.org/abs/1901.03642.
    [28]
    GAUTAM A, MAHANGADE S, GUPTA V I, et al. An experimental comparison of visual SLAM systems[C]//Proceedings of the International Symposium of Asian Control Association on Intelligent Robotics and Industrial Automation. Piscataway: IEEE Press, 2021: 13-18.
