Abstract (translated from the Chinese):
In autonomous driving and robot navigation systems, the odometer is a device that continuously provides the system's pose. Visual odometry obtains a high-accuracy trajectory of the moving target at low cost, and feature-based visual odometry methods have low time complexity and high computation speed, which benefits real-time data processing. However, traditional feature-based methods face two technical bottlenecks: insufficient accuracy of feature matching, and low effectiveness of the objective-function weights in pose estimation. To improve the accuracy of inter-frame feature matching, this paper proposes a cross-check closed-loop matching strategy: a reverse verification step is added to the traditional one-way closed-loop matching to obtain a set of matched points with higher accuracy. This strategy overcomes the poor robustness and low inlier ratio of one-way closed-loop matching and thereby improves estimation accuracy. Within the cross-check strategy, the motion information of the previous frame is also used to narrow the search range for feature matching in the current frame, reducing the time complexity of matching. To address the low effectiveness of the objective-function weights, the number of frames in which a feature point appears in the image sequence is taken as its life cycle, and a weight-setting method based on this life cycle is proposed. In pose estimation, the life cycle of a feature point effectively reflects its stability, and using it as the objective-function weight reduces the accumulated error. The algorithm is tested on the public KITTI dataset, and the experimental results show that it achieves high-accuracy, real-time visual odometry.
Abstract: Odometry is widely applied to continuously obtain system poses in autonomous driving and robot navigation systems. Visual odometry estimates the target's motion trajectory with high precision at low cost, and feature-based visual odometry has the advantages of low time complexity and high processing speed, which are conducive to real-time processing. However, traditional feature-based visual odometry faces two technical bottlenecks: low accuracy of feature detection and matching, and low effectiveness of the objective-function weights in pose estimation. To address the low accuracy of feature matching between frames, we present a cross-check feature matching strategy. It adds a reverse check to the traditional one-way 'circle' matching strategy to obtain more accurate sets of matched features. This strategy increases the inlier ratio and solves the low-robustness problem of the one-way 'circle' strategy, which improves estimation accuracy. Meanwhile, within the cross-check strategy, we use the motion information of the previous frame to reduce the search scope in the current frame. To address the low effectiveness of the objective-function weights, we use the number of occurrences of a feature as its life cycle and present an objective-function weight-setting method that adaptively considers the life cycle of the extracted features. In pose estimation, the life cycle of a feature reflects its stability, and a weight based on it decreases the accumulative error. We evaluate the proposed method on the publicly available KITTI dataset. The experimental results demonstrate that it achieves high-accuracy, real-time visual odometry calculation.
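The core of the cross-check (reverse verification) idea can be sketched for two descriptor sets: a forward nearest-neighbour match is kept only if the backward match agrees. This is a minimal illustration, not the paper's full closed-loop pipeline (which chains matches across stereo pairs and consecutive frames); the function name and L2 descriptor distance are assumptions for the sketch.

```python
import numpy as np

def cross_check_match(desc_a, desc_b):
    """Mutual nearest-neighbour (cross-check) matching sketch.

    A pair (i, j) survives only if descriptor j in B is the nearest
    neighbour of descriptor i in A *and* i is, in turn, the nearest
    neighbour of j in the reverse direction. In the paper's setting,
    the previous frame's motion would additionally restrict which
    candidates in B are searched at all; that gating is omitted here.
    """
    # Pairwise L2 distances between the two descriptor sets (broadcasting).
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    fwd = d.argmin(axis=1)  # best candidate in B for each feature in A
    bwd = d.argmin(axis=0)  # best candidate in A for each feature in B
    # Keep only mutually consistent pairs.
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```

One-way matching would accept every entry of `fwd`; the reverse pass discards asymmetric pairs, which is what raises the inlier ratio before pose estimation.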
Key words:
- feature matching /
- visual odometry /
- crosscheck /
- pose estimation /
- real-time
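The life-cycle weighting described in the abstract can be sketched as follows. The paper's exact weighting function is not given in this excerpt; the saturating linear weight and the `cap` parameter below are assumptions chosen only to illustrate how track longevity could scale a weighted reprojection objective.

```python
import numpy as np

def life_cycle_weights(track_lengths, cap=10):
    """Map each feature's life cycle (number of frames in which it has
    appeared) to a weight in (0, 1], saturating at `cap` frames.
    Longer-lived features are treated as more stable. NOTE: this
    saturating linear form is a hypothetical stand-in for the paper's
    actual weight-setting rule.
    """
    t = np.asarray(track_lengths, dtype=float)
    return np.minimum(t, cap) / cap

def weighted_reprojection_cost(residuals, weights):
    """Weighted sum of squared reprojection residuals, i.e. the kind of
    objective minimized during pose estimation, with per-feature weights.
    """
    r = np.asarray(residuals, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * r * r))
```

With such weights, a short-lived (likely unstable) feature contributes little to the objective, which is the mechanism by which the accumulated pose error is reduced.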
Table 1. Reprojection error statistics for KITTI image sequences 0-4

Sequence                        0      1      2      3      4
Mean reprojection error/pixel   1.29   0.99   1.06   0.93   0.89

Table 2. Estimation error on the KITTI dataset using ORB features and normalized HARRIS features

Feature             Cross-check used   Translation error/%   Rotation error/((°)·m-1)
Normalized HARRIS   yes                1.59                  0.0065
Normalized HARRIS   no                 2.44                  0.0134
ORB                 yes                1.89                  0.0086
ORB                 no                 2.67                  0.0156

Table 3. Average pose-estimation processing time for KITTI image sequences 0-10 (unit: ms)

Feature             Feature detection   Feature matching   Pose estimation   Total time
Normalized HARRIS   25                  62                 11                98
ORB                 15                  117                11                143