Volume 46 Issue 9
Sep.  2020
Citation: LUO Sheng, ZHAO Li, WANG Muchou, et al. Lane semantic analysis based on road feature information[J]. Journal of Beijing University of Aeronautics and Astronautics, 2020, 46(9): 1643-1649. doi: 10.13700/j.bh.1001-5965.2020.0079 (in Chinese)

Lane semantic analysis based on road feature information

doi: 10.13700/j.bh.1001-5965.2020.0079
Funds:
  Zhejiang Provincial Natural Science Foundation of China LQ19F020005
  Zhejiang Province Public Welfare Technology Project 2016C31117
  Wenzhou Science and Technology Project 2018ZG021
More Information
  • Corresponding author: ZHAO Li. E-mail: lizhao@wzu.edu.cn
  • Received Date: 03 Mar 2020
  • Accepted Date: 20 Mar 2020
  • Publish Date: 20 Sep 2020
  • Abstract: Law enforcement on express roads from a moving car requires semantic analysis of the road with a lane detection algorithm, but algorithms based on hand-crafted features do not achieve sufficient accuracy and recall, while algorithms based on deep learning demand too many computing resources. Therefore, this paper proposes a semantic analysis algorithm based on road feature information. The algorithm uses the gradient statistics of edge points to filter the candidate points in Hough space, and then applies dynamic programming to find the most reasonable combination of lane lines among the remaining candidates, so it can accurately find all lane markings on the road with limited computing resources. Experiments on self-acquired data show that the proposed method finds all lanes on both structured and unstructured roads. In a comparative experiment against several traditional lane detection methods and deep learning networks, the proposed algorithm shows improvements in accuracy, recall rate and computing speed.
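
The abstract describes the pipeline only at a high level, so the short Python sketch below illustrates the general idea under explicit assumptions: OpenCV's Canny and Sobel operators supply edge points and gradient orientations, Hough-space candidates are kept only when the gradient statistics of their supporting edge points agree with the line normal, and a simple greedy pass over the scored candidates stands in for the paper's dynamic-programming search for the best lane-line combination. All function names, thresholds and scoring choices here are illustrative assumptions, not the authors' implementation.

    import numpy as np
    import cv2


    def detect_lane_candidates(gray, angle_tol_deg=20.0, min_support=30, max_lanes=4):
        """Illustrative sketch (not the paper's code): Hough candidates filtered by
        edge-gradient statistics, then a greedy stand-in for the DP combination step."""
        # Edge map and per-pixel gradient orientation.
        edges = cv2.Canny(gray, 50, 150)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        orient = np.arctan2(gy, gx)                      # gradient direction, radians

        # Standard Hough transform gives (rho, theta) line candidates.
        lines = cv2.HoughLines(edges, 1, np.pi / 180.0, threshold=80)
        if lines is None:
            return []

        ys, xs = np.nonzero(edges)                       # coordinates of edge points
        candidates = []
        for rho, theta in lines[:, 0]:
            # Edge points lying (approximately) on this Hough line.
            on_line = np.abs(xs * np.cos(theta) + ys * np.sin(theta) - rho) < 2.0
            if on_line.sum() < min_support:
                continue
            # A lane marking's gradient should be roughly parallel to the line
            # normal (theta), up to sign; measure how many supporting points agree.
            diff = np.angle(np.exp(1j * (orient[ys[on_line], xs[on_line]] - theta)))
            diff = np.minimum(np.abs(diff), np.pi - np.abs(diff))
            agreement = np.mean(diff < np.deg2rad(angle_tol_deg))
            if agreement > 0.5:
                candidates.append((agreement * on_line.sum(), rho, theta))

        # Greedy selection of mutually distinct lines as a simplified stand-in
        # for the dynamic-programming search over lane-line combinations.
        candidates.sort(key=lambda c: c[0], reverse=True)
        chosen = []
        for _, rho, theta in candidates:
            if all(abs(theta - t) > np.deg2rad(8) for _, t in chosen):
                chosen.append((rho, theta))
            if len(chosen) == max_lanes:
                break
        return chosen

In the paper's method, the dynamic-programming step searches over lane-line combinations jointly rather than selecting candidates one by one as in this simplified stand-in.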

     

