
基于特征交叉检验的实时视觉里程计方法 (A real-time visual odometry method based on crosscheck of feature)

范维思 (FAN Weisi), 尹继豪 (YIN Jihao), 袁丁 (YUAN Ding), 朱红梅 (ZHU Hongmei)

引用本文: 范维思, 尹继豪, 袁丁, 等. 基于特征交叉检验的实时视觉里程计方法[J]. 北京航空航天大学学报, 2018, 44(11): 2444-2453. doi: 10.13700/j.bh.1001-5965.2018.0133
Citation: FAN Weisi, YIN Jihao, YUAN Ding, et al. A real-time visual odometry method based on crosscheck of feature[J]. Journal of Beijing University of Aeronautics and Astronautics, 2018, 44(11): 2444-2453. doi: 10.13700/j.bh.1001-5965.2018.0133 (in Chinese)


doi: 10.13700/j.bh.1001-5965.2018.0133
Detailed information
    Author biographies:

    FAN Weisi, male, M.S. candidate. Research interests: computer vision.

    YIN Jihao, male, Ph.D., associate professor, doctoral supervisor. Research interests: remote sensing image processing and machine learning.

    Corresponding author:

    YIN Jihao, E-mail: yjh@buaa.edu.cn

  • CLC number: TP399

A real-time visual odometry method based on crosscheck of feature

  • Abstract:

    In autonomous driving and robot navigation systems, an odometer is a device used to continuously obtain the pose of the system. Visual odometry can recover high-accuracy motion trajectories at low cost, and feature-based visual odometry methods have the advantages of low time complexity and fast computation, which facilitates real-time processing. However, traditional feature-based visual odometry faces two technical bottlenecks: insufficient accuracy of feature matching, and low effectiveness of the weights in the objective function used for pose estimation. To address the insufficient accuracy of inter-frame feature matching, this paper proposes a crosscheck 'circle' matching strategy: on top of the traditional single-track 'circle' matching, a reverse verification pass is added to obtain a matching point set with higher matching accuracy. This strategy overcomes the lack of robustness and the low inlier ratio of the traditional single-track 'circle' matching strategy and improves the accuracy of pose estimation. In addition, the crosscheck matching strategy uses the motion estimated at the previous time step to narrow the search range for feature matching at the current time step, reducing the time complexity of feature point matching. To address the low effectiveness of the objective function weights, this paper takes the number of frames in which a feature point appears in the image sequence as its lifetime and proposes a lifetime-based weighting scheme for the objective function. In pose estimation, the lifetime of a feature point effectively reflects its stability, and using it as the objective function weight reduces the accumulated error of the estimation. The algorithm is tested on the public KITTI dataset, and the experimental results show that the method achieves high-accuracy, real-time visual odometry.
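    The two ideas summarized above (crosschecked 'circle' matching with a motion-prior search window, and lifetime-based weights for the pose objective) can be illustrated with a short sketch. The Python code below is a simplified, hypothetical illustration only, not the authors' implementation: the descriptor/keypoint representation, the helper names crosscheck_match, circle_match and lifetime_weights, and the window, max_dist and cap parameters are all assumptions made for this example.

```python
import numpy as np

def crosscheck_match(desc_a, pts_a, desc_b, pts_b,
                     shift=(0.0, 0.0), window=50.0, max_dist=0.5):
    """Mutual nearest-neighbour ('crosscheck') matching between two views.
    Candidate points in view B are restricted to a window around the point in A
    shifted by the motion predicted from the previous time step, which is what
    shrinks the search range. Returns a dict {index_in_A: index_in_B}."""
    shift = np.asarray(shift, dtype=float)
    forward = {}
    predicted = pts_a + shift                       # motion-prior prediction of A's points in B
    for i in range(len(desc_a)):
        cand = np.where(np.all(np.abs(pts_b - predicted[i]) <= window, axis=1))[0]
        if cand.size == 0:
            continue
        dist = np.linalg.norm(desc_b[cand] - desc_a[i], axis=1)
        if dist.min() <= max_dist:
            forward[i] = int(cand[np.argmin(dist)])
    matches = {}
    for i, j in forward.items():                    # reverse verification pass
        cand = np.where(np.all(np.abs(pts_a - (pts_b[j] - shift)) <= window, axis=1))[0]
        dist = np.linalg.norm(desc_a[cand] - desc_b[j], axis=1)
        if int(cand[np.argmin(dist)]) == i:         # must point back to the same feature
            matches[i] = j
    return matches

def circle_match(views, shifts):
    """Chain crosschecked matches around the loop
    left(t-1) -> right(t-1) -> right(t) -> left(t) -> left(t-1).
    `views` is a list of four (descriptors, points) pairs in that order and
    `shifts` holds one predicted pixel offset per consecutive pair.
    A feature is kept only if the loop returns to its starting index."""
    pair_matches = []
    for k in range(len(views)):
        desc_a, pts_a = views[k]
        desc_b, pts_b = views[(k + 1) % len(views)]
        pair_matches.append(crosscheck_match(desc_a, pts_a, desc_b, pts_b, shift=shifts[k]))
    surviving = []
    for start in pair_matches[0]:
        idx, closed = start, True
        for m in pair_matches:
            if idx not in m:
                closed = False
                break
            idx = m[idx]
        if closed and idx == start:                 # the 'circle' closed on itself
            surviving.append(start)
    return surviving

def lifetime_weights(lifetimes, cap=10):
    """Objective-function weights from feature lifetimes (number of frames a
    feature has been tracked). A capped proportional weight is used purely for
    illustration; the paper's exact weighting function may differ."""
    return np.minimum(np.asarray(lifetimes, dtype=float), cap) / cap
```

    Requiring both that each pairwise match survives reverse verification and that the left-right/previous-current loop closes on the starting feature filters mismatches before pose solving, which is consistent with the inlier-ratio improvements illustrated in Figures 3 and 4.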

     

  • Figure 1.  Single-track 'circle' matching strategy

    Figure 2.  Crosscheck of feature 'circle' matching strategy

    Figure 3.  Example of inliers in the matching point sets obtained from the left and right stereo images after crosscheck of feature

    Figure 4.  Improvement in inlier ratio achieved by each step of the crosscheck of feature

    Figure 5.  Example of matched feature points between consecutive frames and ground truth after crosscheck of feature

    Figure 6.  Inlier ratio of the ORB operator versus the proposed normalized HARRIS descriptor

    Figure 7.  Test results of motion trajectory on KITTI dataset 0 and dataset 5 for each algorithm

    Table 1.  Statistical results of reprojection error for KITTI dataset image sequences 0 to 4

    Image sequence                      0       1       2       3       4
    Average reprojection error/pixel    1.29    0.99    1.06    0.93    0.89

    Table 2.  Estimation errors on the KITTI dataset using ORB feature and normalized HARRIS feature

    Feature             Crosscheck used    Translation error/%    Rotation error/((°)·m⁻¹)
    Normalized HARRIS   Yes                1.59                   0.0065
    Normalized HARRIS   No                 2.44                   0.0134
    ORB                 Yes                1.89                   0.0086
    ORB                 No                 2.67                   0.0156

    Table 3.  Average pose estimation processing time for KITTI dataset image sequences 0 to 10 (unit: ms)

    Feature             Feature detection    Feature matching    Pose estimation    Total runtime
    Normalized HARRIS   25                   62                  11                 98
    ORB                 15                   117                 11                 143

    Table 4.  Average pose estimation errors and processing time of different algorithms on KITTI dataset sequences 0 to 10

    Algorithm            Translation error/%    Rotation error/((°)·m⁻¹)    Average time/s
    RotRocc[18]          1.25                   0.0041                      0.200
    VISO2-S[5]           2.44                   0.0114                      0.050
    Proposed algorithm   1.59                   0.0065                      0.098
  • [1] NISTER D, NARODITSKY O, BERGEN J. Visual odometry[C]//Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE Press, 2004: 652-659.
    [2] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: Fast semi-direct monocular visual odometry[C]//IEEE International Conference on Robotics and Automation. Piscataway, NJ: IEEE Press, 2014: 15-22.
    [3] DAVISON A J. Real-time simultaneous localisation and mapping with a single camera[C]//Proceedings 9th IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE Press, 2008: 1403.
    [4] KITT B, GEIGER A, LATEGAHN H. Visual odometry based on stereo image sequences with RANSAC-based outlier rejection scheme[C]//Intelligent Vehicles Symposium. Piscataway, NJ: IEEE Press, 2010: 486-492.
    [5] GEIGER A, ZIEGLER J, STILLER C. StereoScan: Dense 3D reconstruction in real-time[C]//Intelligent Vehicles Symposium (IV). Piscataway, NJ: IEEE Press, 2011: 963-968.
    [6] CVIŠIĆ I, PETROVIĆ I. Stereo odometry based on careful feature selection and tracking[C]//2015 European Conference on Mobile Robots (ECMR). Piscataway, NJ: IEEE Press, 2015: 1-6.
    [7] BADINO H, YAMAMOTO A, KANADE T. Visual odometry by multi-frame feature integration[C]//2013 IEEE International Conference on Computer Vision Workshops. Piscataway, NJ: IEEE Press, 2013: 222-229.
    [8] GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]//2012 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE Press, 2012: 3354-3361.
    [9] PERIS M, MAKI A, MARTULL S, et al. Towards a simulation driven stereo vision system[C]//International Conference on Pattern Recognition. Piscataway, NJ: IEEE Press, 2012: 1038-1042.
    [10] SCARAMUZZA D, FRAUNDORFER F. Visual odometry [tutorial][J]. IEEE Robotics & Automation Magazine, 2011, 18(4): 80-92.
    [11] FRAUNDORFER F, SCARAMUZZA D. Visual odometry: Part II: Matching, robustness, optimization, and applications[J]. IEEE Robotics & Automation Magazine, 2012, 19(2): 78-90.
    [12] ENGEL J, STURM J, CREMERS D. Semi-dense visual odometry for a monocular camera[C]//IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE Press, 2014: 1449-1456.
    [13] BEALL C, LAWRENCE B J, ILA V, et al. 3D reconstruction of underwater structures[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, NJ: IEEE Press, 2010: 4418-4423.
    [14] HOWARD A. Real-time stereo visual odometry for autonomous ground vehicles[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, NJ: IEEE Press, 2008: 3946-3952.
    [15] MUR-ARTAL R, MONTIEL J M M, TARDÓS J D. ORB-SLAM: A versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163. doi: 10.1109/TRO.2015.2463671
    [16] KAESS M, NI K, DELLAERT F. Flow separation for fast and robust stereo odometry[C]//IEEE International Conference on Robotics and Automation. Piscataway, NJ: IEEE Press, 2009: 973-978.
    [17] DEIGMOELLER J, EGGERT J. Stereo visual odometry without temporal filtering[C]//Pattern Recognition. Berlin: Springer, 2016: 166-175. doi: 10.1007/978-3-319-45886-1_14
    [18] BUCZKO M, WILLERT V. Flow-decoupled normalized reprojection error for visual odometry[C]//IEEE International Conference on Intelligent Transportation Systems. Piscataway, NJ: IEEE Press, 2016: 1161-1167.
    [19] KLEIN G, MURRAY D. Parallel tracking and mapping for small AR workspaces[C]//IEEE and ACM International Symposium on Mixed and Augmented Reality. Piscataway, NJ: IEEE Press, 2008: 1-10.
    [20] TRIGGS B, MCLAUCHLAN P F, HARTLEY R I, et al. Bundle adjustment-A modern synthesis[C]//International Workshop on Vision Algorithms. Berlin: Springer, 1999: 298-372.
Publication history
  • Received: 2018-03-16
  • Accepted: 2018-04-08
  • Published online: 2018-11-20
