Volume 50 Issue 7
Jul.  2024
HU D D,ZHANG Z T,NIU G C. Lane line detection incorporating CBAM mechanism and deformable convolutional network[J]. Journal of Beijing University of Aeronautics and Astronautics,2024,50(7):2150-2160 (in Chinese) doi: 10.13700/j.bh.1001-5965.2022.0601

Lane line detection incorporating CBAM mechanism and deformable convolutional network

doi: 10.13700/j.bh.1001-5965.2022.0601
Funds:  Tianjin Science and Technology Plan (17ZXHLGX00120); The Fundamental Research Funds for the Central Universities (3122022PY17,3122017003)
More Information
  • Corresponding author: E-mail: niu_guochen@139.com
  • Received Date: 09 Jul 2022
  • Accepted Date: 19 Nov 2022
  • Available Online: 23 Dec 2022
  • Publish Date: 15 Dec 2022
  • Abstract: To meet the accuracy and real-time requirements of autonomous driving and advanced driver assistance systems (ADAS) for lane line detection, a lane line detection method named CADCN, incorporating the convolutional block attention module (CBAM) and deformable convolutional networks (DCN), was proposed. First, the CBAM mechanism was embedded in the feature extraction module to enhance useful features and suppress useless feature responses. Second, DCN was used in place of conventional convolution, sampling with learned offsets to capture the geometric deformation of lane lines and improve the modeling capability of the convolution kernel. Finally, based on the idea of row-anchor classification, location points along each row were selected and classified to predict lane line positions, improving the real-time performance of the detection model. The CADCN model was trained and validated on public lane line datasets. While maintaining real-time performance, the model reaches an accuracy of 96.63% on the TuSimple dataset and a comprehensive F1 score of 74.4% on the CULane dataset, verifying the effectiveness of the algorithm.
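The CBAM refinement described in the abstract applies channel attention (a shared MLP over average- and max-pooled channel descriptors) followed by spatial attention (a convolution over pooled channel maps). The following is a minimal NumPy sketch of that structure, not the paper's implementation: the MLP weights are random demo values and a fixed averaging kernel stands in for the learned 7×7 convolution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    # feat: (C, H, W). A shared two-layer MLP scores avg- and max-pooled
    # channel descriptors; their sum is squashed into per-channel weights.
    avg = feat.mean(axis=(1, 2))                       # (C,)
    mx = feat.max(axis=(1, 2))                         # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)       # C -> C/r -> C
    scale = sigmoid(mlp(avg) + mlp(mx))                # (C,)
    return feat * scale[:, None, None]

def spatial_attention(feat, k=7):
    # Channel-wise avg and max maps are stacked and convolved with a k x k
    # kernel (here a fixed averaging kernel standing in for a learned one),
    # giving a per-location attention map.
    avg = feat.mean(axis=0)
    mx = feat.max(axis=0)
    stacked = np.stack([avg, mx])                      # (2, H, W)
    pad = k // 2
    padded = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    H, W = avg.shape
    kernel = np.full((2, k, k), 1.0 / (2 * k * k))
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[:, i:i + k, j:j + k] * kernel)
    return feat * sigmoid(out)[None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 6, 6, 2                                # r = reduction ratio
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1            # demo MLP weights
w2 = rng.standard_normal((C, C // r)) * 0.1
refined = spatial_attention(channel_attention(feat, w1, w2))
print(refined.shape)                                   # (8, 6, 6)
```

The refined tensor keeps the input shape, so the module can be dropped into a feature-extraction backbone without changing downstream layers.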
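Row-anchor classification, as described above, replaces per-pixel segmentation with a per-row choice among a small number of horizontal grid cells plus a background class, which is what makes the approach fast. A minimal decoding sketch, where the cell count and image width are illustrative values rather than the paper's settings:

```python
import numpy as np

def decode_row_anchors(logits, img_w):
    # logits: (num_rows, num_cells + 1); the last class means
    # "no lane crosses this row anchor".
    num_rows, num_cls = logits.shape
    num_cells = num_cls - 1
    best = logits.argmax(axis=1)               # winning cell per row
    xs = np.full(num_rows, -1.0)               # -1 marks rows with no lane
    hit = best < num_cells
    # Map a cell index back to the x coordinate of its center.
    xs[hit] = (best[hit] + 0.5) * img_w / num_cells
    return xs

logits = np.zeros((4, 5))      # 4 row anchors, 4 cells + 1 background class
logits[0, 1] = 5.0             # lane passes through cell 1 on row 0
logits[1, 2] = 5.0
logits[2, 4] = 5.0             # background: no lane on row 2
logits[3, 3] = 5.0
xs = decode_row_anchors(logits, img_w=800)     # x = -1 marks empty rows
print(xs)
```

Because each row anchor needs only one argmax over a few cells, decoding cost grows with the number of anchors rather than the number of pixels.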

     

References
  • [1]
    WAYKOLE S, SHIWAKOTI N, STASINOPOULOS P. Review on lane detection and tracking algorithms of advanced driver assistance system[J]. Sustainability, 2021, 13(20): 11417. doi: 10.3390/su132011417
    [2]
    DU E Y, ZHANG N, LI Y D. Lane line quick detection method based on Gabor filter[J]. Infrared and Laser Engineering, 2018, 47(8): 314-321 (in Chinese).
    [3]
    ANDRADE D C, BUENO F, FRANCO F R, et al. A novel strategy for road lane detection and tracking based on a vehicle’s forward monocular camera[J]. IEEE Transactions on Intelligent Transportation Systems, 2019, 20(4): 1497-1507. doi: 10.1109/TITS.2018.2856361
    [4]
    LUO S, ZHANG X Q, HU J, et al. Multiple lane detection via combining complementary structural constraints[J]. IEEE Transactions on Intelligent Transportation Systems, 2021, 22(12): 7597-7606. doi: 10.1109/TITS.2020.3005396
    [5]
    蔡英凤, 张田田, 王海, 等. 基于实例分割和自适应透视变换算法的多车道线检测[J]. 东南大学学报(自然科学版), 2020, 50(4): 775-781. doi: 10.3969/j.issn.1001-0505.2020.04.023

    CAI Y F, ZHANG T T, WANG H, et al. Multi-lane detection based on instance segmentation and adaptive perspective transformation[J]. Journal of Southeast University (Natural Science Edition), 2020, 50(4): 775-781(in Chinese). doi: 10.3969/j.issn.1001-0505.2020.04.023
    [6]
    KIM J, LEE M. Robust lane detection based on convolutional neural network and random sample consensus[C]//Proceedings of the International Conference on Neural Information Processing. Berlin: Springer, 2014: 454-461.
    [7]
    LI J, MEI X, PROKHOROV D, et al. Deep neural network for structural prediction and lane detection in traffic scene[J]. IEEE Transactions on Neural Networks and Learning Systems, 2017, 28(3): 690-703. doi: 10.1109/TNNLS.2016.2522428
    [8]
    PAN X G, SHI J P, LUO P, et al. Spatial as deep: Spatial CNN for traffic scene understanding[C]//Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. Palo Alto: AAAI, 2018: 7276-7283.
    [9]
    HOU Y N, MA Z, LIU C X, et al. Learning lightweight lane detection CNNs by self attention distillation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE Press, 2019: 1013-1021.
    [10]
    NEVEN D, DE BRABANDERE B, GEORGOULIS S, et al. Towards end-to-end lane detection: An instance segmentation approach[C]//Proceedings of the IEEE Intelligent Vehicles Symposium (IV). Piscataway: IEEE Press, 2018: 286-291.
    [11]
    TABELINI L, BERRIEL R, PAIXÃO T M, et al. PolyLaneNet: Lane estimation via deep polynomial regression[C]//Proceedings of the 25th International Conference on Pattern Recognition. Piscataway: IEEE Press, 2021: 6150-6156.
    [12]
    WOO S, PARK J, LEE J Y, et al. CBAM: Convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2018: 3-19.
    [13]
    DAI J F, QI H Z, XIONG Y W, et al. Deformable convolutional networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2017: 764-773.
    [14]
    HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2016: 770-778.
    [15]
    ZHU X Z, HU H, LIN S, et al. Deformable ConvNets V2: More deformable, better results[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2019: 9300-9308.
    [16]
    QIN Z Q, WANG H Y, LI X. Ultra fast structure-aware deep lane detection[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2020: 276-291.
    [17]
    CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834-848. doi: 10.1109/TPAMI.2017.2699184
    [18]
    PHILION J. FastDraw: Addressing the long tail of lane detection by adapting a sequential prediction network[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2019: 11574-11583.
    [19]
    KO Y, LEE Y, AZAM S, et al. Key points estimation and point instance segmentation approach for lane detection[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(7): 8949-8958.
    [20]
    GHAFOORIAN M, NUGTEREN C, BAKA N, et al. EL-GAN: Embedding loss driven generative adversarial networks for lane detection[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2019: 256-272.

    Figures(10)  / Tables(5)
