
Lane semantic analysis based on road feature information

LUO Sheng, ZHAO Li, WANG Muchou

Citation: LUO Sheng, ZHAO Li, WANG Muchou, et al. Lane semantic analysis based on road feature information[J]. Journal of Beijing University of Aeronautics and Astronautics, 2020, 46(9): 1643-1649. doi: 10.13700/j.bh.1001-5965.2020.0079 (in Chinese)


doi: 10.13700/j.bh.1001-5965.2020.0079

More Information
    About the authors:

    LUO Sheng, male, Ph.D., associate professor. Research interests: machine vision, machine learning, and robotics

    ZHAO Li, female, M.S., assistant research fellow. Research interests: detection, tracking, video analysis, and machine learning

    WANG Muchou, male, M.S., assistant research fellow. Research interests: image processing, computer vision, and machine learning

    Corresponding author:

    ZHAO Li. E-mail: lizhao@wzu.edu.cn

  • CLC number: TP391.4

Lane semantic analysis based on road feature information

Funds: 

Zhejiang Provincial Natural Science Foundation of China LQ19F020005

Zhejiang Province Public Welfare Technology Project 2016C31117

Wenzhou Science and Technology Project 2018ZG021

  • Abstract:

    Dynamic traffic enforcement on expressways requires lane detection algorithms that can parse the road structurally. However, lane detection algorithms based on traditional handcrafted features suffer from insufficient precision and recall, while deep-learning-based algorithms demand too many computing resources. A lane structural parsing algorithm based on road feature information is therefore proposed. It uses gradient statistics of edge points to screen candidate points in Hough space, and then applies dynamic programming to find the most plausible combination of lane lines among the remaining candidates, so that all lanes on the road can be detected accurately on platforms with limited computing resources. In detection experiments on our own data, the proposed algorithm accurately locates lanes on both structured and unstructured roads; in comparison experiments, it outperforms similar algorithms in precision, recall, and processing speed.
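The first two stages the abstract describes (edge points voting in Hough space, with gradient statistics used to prune the candidates) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the ±5° voting window, the synthetic edge points, and the function name `detect_lines_hough` are choices made for the example, and the dynamic-programming selection over surviving candidates is omitted.

```python
import numpy as np

def detect_lines_hough(points, grad_angles, n_theta=180, rho_max=200, top_k=1):
    """Vote edge points into a (theta, rho) Hough accumulator, letting each
    point vote only near its own gradient orientation -- a cheap stand-in for
    the paper's gradient-statistics screening of Hough-space candidates."""
    thetas = np.deg2rad(np.arange(n_theta))          # normal angles 0..179 deg
    acc = np.zeros((n_theta, 2 * rho_max), dtype=np.int32)
    for (x, y), g in zip(points, grad_angles):
        # A line's normal direction equals the gradient direction of its edge
        # points, so restrict votes to +-5 deg around each gradient angle.
        g_deg = int(np.rad2deg(g) % 180)
        for t in range(max(0, g_deg - 5), min(n_theta, g_deg + 6)):
            rho = int(round(x * np.cos(thetas[t]) + y * np.sin(thetas[t])))
            acc[t, rho + rho_max] += 1               # offset rho to stay >= 0
    # Return the strongest (theta_deg, rho) peaks as line candidates.
    flat = np.argsort(acc, axis=None)[::-1][:top_k]
    t_idx, r_idx = np.unravel_index(flat, acc.shape)
    return [(int(t), int(r) - rho_max) for t, r in zip(t_idx, r_idx)]

# Synthetic vertical lane line x = 40 (normal angle 0 deg): 100 edge points
# along it, each with gradient pointing in +x (angle 0 rad).
pts = [(40, y) for y in range(100)]
grad_angles = [0.0] * len(pts)
peaks = detect_lines_hough(pts, grad_angles)
print(peaks)  # → [(0, 40)]
```

Restricting each point to an 11-degree voting window (rather than all 180 angle bins) is what makes this kind of screening cheap: the accumulator receives roughly 6% of the votes a full Hough transform would cast, which matches the abstract's goal of running on platforms with limited computing resources.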

     

  • Figure 1. Flowchart of the proposed algorithm

    Figure 2. Input image and edges of the region of interest (ROI)

    Figure 3. Gradient filter

    Figure 4. Gradient statistics histogram of edge points

    Figure 5. Hough-space image of all edges

    Figure 6. Candidate points in Hough space after three rounds of filtering

    Figure 7. Lane detection results on expressways

    Figure 8. Lane detection results on urban roads

    Table 1. Performance comparison of different algorithms

    Algorithm      Precision/%   Recall/%   Speed/FPS
    Ref. [8]       63.47         59.04       8.2
    Ref. [9]       76.29         73.34      32.7
    Ref. [13]      79.57         87.13      23.6
    LaneNet[14]    92.81         93.19      18.4
    UPSNet[19]     89.96         91.37      13.6
    Proposed       91.23         94.18      36.5
      Note: FPS = frames per second.
  • [1] ZHANG X Q, HU W M, XIE N H, et al. A robust tracking system for low frame rate video[J]. International Journal of Computer Vision, 2015, 115(3): 279-304.
    [2] ZHANG X Q, HU W M, CHEN S Y, et al. Graph-embedding-based learning for robust object tracking[J]. IEEE Transactions on Industrial Electronics, 2014, 61(2): 1072-1084. doi: 10.1109/TIE.2013.2258306
    [3] ZHANG X Q, WANG D, ZHOU Z Y, et al. Robust low-rank tensor recovery with rectification and alignment[J/OL]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019(2019-07-16)[2020-03-01]. doi: 10.1109/TPAMI.2019.2929043
    [4] ILLINGWORTH J, KITTLER J. A survey of the Hough transform[J]. Computer Vision, Graphics, and Image Processing, 1988, 43(2): 280-289.
    [5] ZHANG X Q, HU W M, QU W, et al. Multiple object tracking via species-based particle swarm optimization[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2010, 20(11): 1590-1602. doi: 10.1109/TCSVT.2010.2087455
    [6] ZHANG X Q, LI C C, HU W M, et al. Human pose estimation and tracking via parsing a tree structure based human model[J]. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2014, 44(5): 580-592. doi: 10.1109/TSMC.2013.2280438
    [7] ZHANG X Q, HU W M, BAO H J, et al. Robust head tracking based on multiple cues fusion in the kernel-Bayesian framework[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2013, 23(7): 1197-1208. doi: 10.1109/TCSVT.2013.2241354
    [8] SHIN B S, TAO J, KLETTE R. A superparticle filter for lane detection[J]. Pattern Recognition, 2015, 48(11): 3333-3345. doi: 10.1016/j.patcog.2014.10.011
    [9] SATZODA R K, SATHYANARAYANA S, SRIKANTHAN T, et al. Hierarchical additive Hough transform for lane detection[J]. IEEE Embedded Systems Letters, 2010, 2(2): 23-26. doi: 10.1109/LES.2010.2051412
    [10] KATRU A. Improved parallel lane detection using modified additive Hough transform[J]. International Journal of Image, Graphics and Signal Processing, 2016, 8(11): 10-17. doi: 10.5815/ijigsp.2016.11.02
    [11] WANG Y, TEOH E K, SHEN D. Lane detection and tracking using B-snake[J]. Image and Vision Computing, 2004, 22(4): 269-280.
    [12] JUNG C R, KELBER C R. Lane following and lane departure using a linear parabolic model[J]. Image and Vision Computing, 2005, 23(13): 1192-1202. doi: 10.1016/j.imavis.2005.07.018
    [13] BERTOZZI M, BROGGI A. GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection[J]. IEEE Transactions on Image Processing, 1998, 7(1): 62-81.
    [14] WANG Z, REN W Q, QIU Q. LaneNet: Real-time lane detection networks for autonomous driving[EB/OL]. (2018-07-04)[2020-03-01]. https://arxiv.org/abs/1807.01726
    [15] GARNETT N, COHEN R, PE-ER T, et al. 3D-LaneNet: End-to-end 3D multiple lane detection[EB/OL]. (2018-11-26)[2020-03-01]. https://arxiv.org/abs/1811.10203
    [16] NEVEN D, BRABANDERE B D, GEORGOULIS S, et al. Towards end-to-end lane detection: An instance segmentation approach[C]//IEEE Intelligent Vehicles Symposium (IV). Piscataway: IEEE Press, 2018: 286-291.
    [17] LIANG D, GUO Y C, ZHANG S K, et al. LineNet: A zoomable CNN for crowdsourced high definition maps modeling in urban environments[EB/OL]. (2018-07-16)[2020-03-01]. https://arxiv.org/abs/1807.05696v1
    [18] PAN X, SHI J, LUO P, et al. Spatial as deep: Spatial CNN for traffic scene understanding[C]//The 32nd AAAI Conference on Artificial Intelligence (AAAI-18). Palo Alto: AAAI, 2018.
    [19] XIONG Y W, LIAO R J, ZHAO H S, et al. UPSNet: A unified panoptic segmentation network[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2019.
    [20] DING Y, XU Z, ZHANG Y B, et al. Fast lane detection based on bird's eye view and improved random sample consensus algorithm[J]. Multimedia Tools and Applications, 2017, 76(21): 22979-22998. doi: 10.1007/s11042-016-4184-6
    [21] GEIGER A, LENZ P, STILLER C, et al. Vision meets robotics: The KITTI dataset[J]. International Journal of Robotics Research, 2013, 32(11): 1231-1237. doi: 10.1177/0278364913491297
Figures (8) / Tables (1)
Publication history
  • Received: 2020-03-03
  • Accepted: 2020-03-20
  • Available online: 2020-09-20
