Volume 40, Issue 9, Sep. 2014
Zhang Yang, Wang Xinlong. Landmark fixed high-precision binocular visual navigation method[J]. Journal of Beijing University of Aeronautics and Astronautics, 2014, 40(9): 1305-1311. doi: 10.13700/j.bh.1001-5965.2013.0588(in Chinese)

Landmark fixed high-precision binocular visual navigation method

doi: 10.13700/j.bh.1001-5965.2013.0588
  • Received Date: 15 Oct 2013
  • Publish Date: 20 Sep 2014
  • Autonomous navigation is a crucial technology for unmanned aerial vehicles (UAVs), intelligent robots, and smart cars. To overcome the shortcomings of traditional visual navigation methods, a landmark-fixed high-precision binocular visual autonomous navigation method, based on camera egomotion estimation theory, was put forward. In this method, the camera's egomotion parameters, velocity and angular velocity, are measured from the variation between consecutive frames of the continuous image sequence, and the position and attitude are then calculated by accumulating the velocity and angular velocity. Moreover, by incorporating the absolute positioning information provided by landmarks, the method's performance in long-duration navigation is improved. By combining the absolute positioning information from the landmarks with the relative navigation information from binocular vision, and by modeling and decoupling the measurement noise of the binocular vision, the accumulation and growth of the navigation error are suppressed. Simulation results show that this method offers high precision, autonomy, and complete navigation information.
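The two stages the abstract describes — dead-reckoning by accumulating the measured velocity and angular velocity, then correcting with an absolute landmark fix — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the first-order rotation update, the identity measurement model (a landmark fix taken as a direct absolute-position observation), and the simple Kalman-style gain are all assumptions made here for clarity.

```python
import numpy as np

def integrate_egomotion(p, R, v_body, w_body, dt):
    """Dead-reckon pose over one frame interval by accumulating the
    camera's egomotion parameters (body-frame velocity and angular
    velocity), as the relative-navigation stage describes."""
    # Skew-symmetric matrix of the angular velocity
    wx = np.array([[0.0,        -w_body[2],  w_body[1]],
                   [w_body[2],   0.0,       -w_body[0]],
                   [-w_body[1],  w_body[0],  0.0]])
    # First-order rotation update, then re-orthonormalize via SVD
    # so R stays a valid rotation matrix
    R_new = R @ (np.eye(3) + wx * dt)
    u, _, vt = np.linalg.svd(R_new)
    R_new = u @ vt
    # Position accumulates the body velocity rotated into the world frame
    p_new = p + R @ v_body * dt
    return p_new, R_new

def landmark_correction(p, P, z, R_meas):
    """Kalman-style position update using the absolute position z
    implied by an observed landmark. Hypothetical measurement model:
    z = p + noise, i.e. the landmark fix observes position directly.
    P is the position covariance, R_meas the measurement covariance."""
    H = np.eye(3)                      # identity measurement model
    y = z - H @ p                      # innovation
    S = H @ P @ H.T + R_meas           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    p_new = p + K @ y
    P_new = (np.eye(3) - K @ H) @ P
    return p_new, P_new
```

Between landmark sightings, only `integrate_egomotion` runs and the error grows with time; each landmark fix then shrinks both the position error and its covariance, which is the mechanism the abstract credits for suppressing error accumulation.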


