Abstract:
Mobile law enforcement on expressways requires a lane detection algorithm that can parse the road structurally. However, traditional algorithms based on hand-crafted features lack sufficient accuracy and recall, while deep-learning-based algorithms demand too much computing power. This paper therefore proposes a structured road-parsing algorithm based on road feature information. The algorithm uses gradient statistics of edge points to filter the candidate points in Hough space, and then applies dynamic programming to find the most plausible combination of lane lines among the remaining candidates, so that all lanes on the road can be detected accurately on platforms with limited computing resources. In experiments on self-collected data, the proposed algorithm accurately locates lanes on both structured and unstructured roads; in comparative experiments against traditional lane detection methods and deep learning networks, it shows improvements in precision, recall, and computing speed.
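The two-stage pipeline the abstract describes (gradient-weighted Hough voting followed by dynamic-programming selection of a lane-line combination) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the voting resolution, the use of gradient magnitude as a vote weight, and the minimum inter-lane gap `min_gap` are all assumptions made for the sketch.

```python
import numpy as np

def hough_votes(edge_points, grad_mags, n_theta=180, n_rho=200, max_rho=200.0):
    """Accumulate Hough-space votes, weighting each edge point by its
    gradient magnitude so weak edges contribute less (a hypothetical
    stand-in for the paper's gradient-statistics filtering)."""
    acc = np.zeros((n_theta, n_rho))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for (x, y), g in zip(edge_points, grad_mags):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)   # rho for every theta
        idx = np.clip(((rhos + max_rho) / (2 * max_rho) * n_rho).astype(int),
                      0, n_rho - 1)
        acc[np.arange(n_theta), idx] += g                # gradient-weighted vote
    return acc, thetas

def select_lanes_dp(candidates, min_gap):
    """Pick the highest-scoring set of lane candidates whose rho values are
    at least `min_gap` apart, via a simple O(n^2) dynamic program.
    candidates: iterable of (rho, score) pairs."""
    cands = sorted(candidates)               # order candidates across the road
    n = len(cands)
    dp, prev = [0.0] * n, [-1] * n
    for i in range(n):
        dp[i] = cands[i][1]                  # selection starting at candidate i
        for j in range(i):                   # extend the best compatible selection
            if cands[i][0] - cands[j][0] >= min_gap and dp[j] + cands[i][1] > dp[i]:
                dp[i], prev[i] = dp[j] + cands[i][1], j
    best = max(range(n), key=dp.__getitem__)
    lanes = []
    while best != -1:                        # backtrack the chosen combination
        lanes.append(cands[best])
        best = prev[best]
    return lanes[::-1]
```

For example, given candidates (10, 5.0), (12, 1.0), (30, 4.0), (50, 3.0) with `min_gap=15`, the DP keeps the three well-separated high-score lines and discards the near-duplicate at rho = 12, which mirrors how a combination search can suppress spurious Hough peaks next to a true lane line.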
Key words:
- lane detection
- road analysis
- dynamic law enforcement
- Hough transform
- dynamic programming