
Target trajectory association method based on orientation constraint and re-identification feature

AI Mingjing, SHAN Guozhi, LIU Penggao, YANG Penggang

Citation: AI Mingjing, SHAN Guozhi, LIU Penggao, et al. Target trajectory association method based on orientation constraint and re-identification feature[J]. Journal of Beijing University of Aeronautics and Astronautics, 2022, 48(6): 957-967. doi: 10.13700/j.bh.1001-5965.2021.0089 (in Chinese)


doi: 10.13700/j.bh.1001-5965.2021.0089
Funds:

    National Natural Science Foundation of China 61872024

    National Key R&D Program of China 2018YFB2100601

    Corresponding author: AI Mingjing, E-mail: amj@buaa.edu.cn

  • CLC number: TP391.4; TP391
  • Abstract:

    Target trajectory association based on detection association and deep learning is a research hotspot in computer vision. However, existing methods lack effective spatio-temporal constraints in their design, and their target appearance features generalize poorly, so recognition errors occur when targets differ markedly in orientation, which leads to frequent ID switches and false associations during trajectory association. To address this problem, a target trajectory association method based on orientation constraint and re-identification features is proposed. Pedestrian orientation discrimination is introduced into person re-identification, and an orientation-constrained re-identification network is proposed to strengthen the representational power of target features. Combining target orientation with spatio-temporal cues such as the position predicted by Kalman filtering and the overlap area, a hierarchical, orientation-constrained trajectory association model is proposed to obtain target trajectories within a single camera. In cross-camera scenes, a simple and effective bidirectional competitive matching mechanism is introduced to associate target trajectories across cameras. Experimental results show that the proposed method outperforms a variety of methods on the MOT datasets, reduces frequent ID switches, and effectively resolves false associations when similar targets walk toward each other; the frame rate reaches 19.6 frames/s, meeting the requirements of near-real-time use.
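The abstract combines three cues for association: re-identification appearance distance, spatial overlap (IoU) with the predicted box, and an orientation constraint, plus a bidirectional competitive match for linking. The sketch below is one illustrative reading of that recipe, not the authors' implementation: the function names, the weights `w_app`/`w_iou`, the 90° gate, and the dictionary fields are all assumptions for demonstration.

```python
# Illustrative sketch (not the paper's code): orientation-gated association
# cost plus mutual-best ("bidirectional competitive") matching.
import math

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def cosine_distance(u, v):
    """1 - cosine similarity between two appearance feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return 1.0 - dot / (nu * nv) if nu and nv else 1.0

def angle_diff(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def association_cost(track, det, w_app=0.6, w_iou=0.4, max_angle=90.0):
    """Gate on orientation first; otherwise mix appearance and overlap.
    Weights and the 90-degree threshold are illustrative assumptions."""
    if angle_diff(track["angle"], det["angle"]) > max_angle:
        return math.inf  # orientations disagree: pair is infeasible
    return (w_app * cosine_distance(track["feat"], det["feat"])
            + w_iou * (1.0 - iou(track["box"], det["box"])))

def mutual_best_matches(tracks, dets):
    """Keep only pairs that are each other's best finite-cost match,
    a simple stand-in for bidirectional competitive matching."""
    if not tracks or not dets:
        return []
    cost = [[association_cost(t, d) for d in dets] for t in tracks]
    matches = []
    for i, row in enumerate(cost):
        j = min(range(len(dets)), key=row.__getitem__)
        if math.isinf(row[j]):
            continue
        if min(range(len(tracks)), key=lambda k: cost[k][j]) == i:
            matches.append((i, j))
    return matches
```

The orientation gate is what suppresses the failure mode the abstract highlights: two similar pedestrians walking toward each other have near-opposite headings, so their pairing cost becomes infinite regardless of appearance similarity.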

     

  • Figure 1.  Overall research ideas and technical route

    Figure 2.  Structure diagram of pedestrian re-identification network with orientation constraint

    Figure 3.  Schematic diagram of distance matrix calculation based on orientation

    Figure 4.  Trajectory life cycle state transition

    Figure 5.  Process of single-camera trajectory generation

    Figure 6.  Schematic diagram of cross-camera target trajectory association

    Figure 7.  Visual display of pedestrian re-identification effect

    Figure 8.  Comparison of trajectory association effect

    Table 1.  Comparison of experimental results of pedestrian re-identification (%)

    Method            Rank-1   Rank-5   mAP
    PCB[18]           92.3     97.2     77.4
    AlignedReID[19]   91.8     97.1     79.3
    PIE[20]           87.33    95.56    69.25
    GLAD[21]          89.9     —        73.9
    Spindle[22]       76.9     91.5     —
    HA-CNN[23]        91.2     —        75.7
    TriHard[15]       86.67    93.38    81.07
    HPM[24]           94.2     97.5     82.7
    PGR[25]           93.87    97.74    77.21
    OSCNN[26]         83.9     —        73.5
    Proposed method   94.21    97.96    83.41

    Table 2.  Performance comparison of trajectory association methods on MOT16 dataset (%)

    Method            MOTA   MOTP   IDP    IDR    IDF1
    EAMTT[28]         52.5   78.8   —      31.5   42.4
    JDE[10]           64.4   —      —      —      55.8
    MOTDT[8]          47.6   —      50.9   40.3   50.9
    SORT[7]           33.4   72.1   54.2   29.6   26.5
    DeepSORT[6]       60.3   79.1   76.9   31.8   37.6
    Proposed method   60.8   78.6   78.2   45.6   54.7

    Table 3.  Performance comparison of trajectory association methods on MOT17 dataset (%)

    Method            MOTA   MOTP   IDP    IDR    IDF1
    FairMOT[12]       67.5   84.7   75.9   45.3   69.8
    Tracktor[11]      53.5   —      —      —      52.3
    SORT[7]           31.6   69.8   45.2   37.5   43.2
    DeepSORT[6]       44.1   83.2   66.6   43.0   52.3
    Proposed method   57.6   79.7   75.8   47.2   53.5
  • [1] GHEISSARI N, SEBASTIAN T B, HARTLEY R. Person reidentification using spatiotemporal appearance[C]//2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2006: 1528-1535.
    [2] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. doi: 10.1109/TPAMI.2016.2577031
    [3] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2016: 779-788.
    [4] RISTANI E, SOLERA F, ZOU R, et al. Performance measures and a data set for multi-target, multi-camera tracking[C]//European Conference on Computer Vision. Berlin: Springer, 2016: 17-35.
    [5] LEAL-TAIXE L, CANTON-FERRER C, SCHINDLER K. Learning by tracking: Siamese CNN for robust target association[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Piscataway: IEEE Press, 2016: 418-425.
    [6] WOJKE N, BEWLEY A, PAULUS D. Simple online and realtime tracking with a deep association metric[C]//2017 IEEE International Conference on Image Processing. Piscataway: IEEE Press, 2017: 3645-3649.
    [7] BEWLEY A, GE Z Y, OTT L, et al. Simple online and realtime tracking[C]//2016 IEEE International Conference on Image Processing. Piscataway: IEEE Press, 2016: 3464-3468.
    [8] CHEN L, AI H Z, ZHUANG Z J, et al. Real-time multiple people tracking with deeply learned candidate selection and person re-identification[C]//2018 IEEE International Conference on Multimedia and Expo. Piscataway: IEEE Press, 2018: 1-6.
    [9] RISTANI E, TOMASI C. Features for multi-target multi-camera tracking and re-identification[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2018: 6036-6046.
    [10] WANG Z D, ZHENG L, LIU Y X, et al. Towards real-time multi-object tracking[C]//European Conference on Computer Vision. Berlin: Springer, 2020: 107-122.
    [11] BERGMANN P, MEINHARDT T, LEAL-TAIXÉ L. Tracking without bells and whistles[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE Press, 2019: 941-951.
    [12] ZHAN Y, WANG C, WANG X, et al. A simple baseline for multi-object tracking[C]//2020 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2020.
    [13] HOU Y, ZHENG L, WANG Z, et al. Locally aware appearance metric for multi-target multi-camera tracking[C]//2020 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2020.
    [14] SCHROFF F, KALENICHENKO D, PHILBIN J. FaceNet: A unified embedding for face recognition and clustering[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2015: 815-823.
    [15] HERMANS A, BEYER L, LEIBE B. In defense of the triplet loss for person re-identification[EB/OL]. (2017-03-22)[2021-02-01]. https://arxiv.org/abs/1703.07737.
    [16] LI D W, ZHANG Z, CHEN X T, et al. A richly annotated dataset for pedestrian attribute recognition[EB/OL]. (2016-03-23)[2021-02-01]. https://arxiv.org/abs/1603.07054.
    [17] ZHENG L, SHEN L Y, TIAN L, et al. Scalable person re-identification: A benchmark[C]//2015 IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2015: 1116-1124.
    [18] SUN Y F, ZHENG L, YANG Y, et al. Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline)[C]//European Conference on Computer Vision. Berlin: Springer, 2018: 480-496.
    [19] ZHANG X, LUO H, FAN X, et al. AlignedReID: Surpassing human-level performance in person re-identification[EB/OL]. (2017-11-22)[2021-02-01]. https://arxiv.org/abs/1711.08184.
    [20] ZHENG L, HUANG Y J, LU H C, et al. Pose-invariant embedding for deep person re-identification[J]. IEEE Transactions on Image Processing, 2019, 28(9): 4500-4509.
    [21] WEI L H, ZHANG S L, YAO H T, et al. GLAD: Global-local-alignment descriptor for scalable person re-identification[J]. IEEE Transactions on Multimedia, 2019, 21(4): 986-999. doi: 10.1109/TMM.2018.2870522
    [22] ZHAO H Y, TIAN M Q, SUN S Y, et al. Spindle Net: Person re-identification with human body region guided feature decomposition and fusion[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2017: 907-915.
    [23] LI W, ZHU X T, GONG S G. Harmonious attention network for person re-identification[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2018: 2285-2294.
    [24] FU Y, WEI Y C, ZHOU Y Q, et al. Horizontal pyramid matching for person re-identification[EB/OL]. (2018-11-10)[2021-02-01]. https://arxiv.org/abs/1804.05275v3.
    [25] ZHAO L M, LI X, ZHUANG Y T, et al. Deeply-learned part-aligned representations for person re-identification[C]//2017 IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2017: 3239-3248.
    [26] ZHENG L, YANG Y, HAUPTMANN A G. Person re-identification: Past, present and future[EB/OL]. (2016-10-10)[2021-02-01]. https://arxiv.org/abs/1610.02984.
    [27] MILAN A, LEAL-TAIXE L, REID I, et al. MOT16: A benchmark for multi-object tracking[EB/OL]. (2016-03-02)[2021-02-01]. https://arxiv.org/abs/1603.00831.
    [28] SANCHEZ-MATILLA R, POIESI F, CAVALLARO A. Online multi-target tracking with strong and weak detections[C]//European Conference on Computer Vision. Berlin: Springer, 2016: 84-99.
Publication history
  • Received: 2021-02-23
  • Accepted: 2021-03-26
  • Available online: 2022-06-20
  • Issue publication date: 2022-06-20
