
Target tracking algorithm based on saliency awareness and consistency constraint

GUO Qiang, WU Tianhao, XU Wei, CHORNOGOR Leonid

Citation: GUO Q, WU T H, XU W, et al. Target tracking algorithm based on saliency awareness and consistency constraint[J]. Journal of Beijing University of Aeronautics and Astronautics, 2023, 49(9): 2244-2257 (in Chinese). doi: 10.13700/j.bh.1001-5965.2021.0688

doi: 10.13700/j.bh.1001-5965.2021.0688

Funds: National Key R&D Program of China (2018YFE0206500); National Natural Science Foundation of China (62071140); International Science & Technology Cooperation Program of China (2015DFR10220)

    Corresponding author: E-mail: guoqiang@hrbeu.edu.cn

  • CLC number: TP391.4

  • Abstract:

    To address the fixed regularization weights and model degradation of the spatially regularized discriminative correlation filter (SRDCF) algorithm, a target tracking algorithm based on saliency awareness and consistency constraint is proposed. Histogram of oriented gradients (HOG) features, shallow features, and mid-level features are extracted and fused to strengthen the representation ability of the object appearance model. A saliency detection algorithm produces a saliency-aware reference weight from the initial frame, which links the regularization weights of adjacent frames. The difference between the actual and the ideal consistency response is minimized, constraining the consistency level so that the filter template does not degrade. A dynamic constraint strategy is further introduced to improve the adaptability of the tracker in complex scenes. The proposed algorithm is evaluated on the OTB2015, TempleColor128, and UAV20L public datasets. The experimental results show that, compared with SRDCF, the proposed algorithm improves the distance precision by 0.108 and the AUC by 0.077 on OTB2015 while running at 22.41 frame/s, offering good real-time performance.
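    As a rough illustration of the saliency-aware reference weight described in the abstract, the sketch below computes a spectral-residual saliency map of the initial-frame search patch and maps it to a spatial regularization reference, so that salient (likely target) pixels receive a small penalty and background pixels a large one. The function names and the `floor`/`base` mapping are assumptions made for this sketch, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray, smooth_sigma=2.5):
    """Spectral-residual saliency map of a grayscale patch, scaled to [0, 1]."""
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual: log amplitude minus its local average.
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, smooth_sigma)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

def saliency_reference_weight(init_patch, floor=0.1, base=3.0):
    """Hypothetical mapping from initial-frame saliency to a spatial
    regularization reference: low penalty on salient target regions,
    high penalty on the surrounding background."""
    sal = spectral_residual_saliency(init_patch)
    return floor + base * (1.0 - sal)
```

    In a tracker of this kind, such a reference would be blended with the per-frame regularization weights so that the penalty map stays anchored to the initial-frame saliency while still adapting between adjacent frames.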

     

  • Figure 1.  Initial frame saliency-aware reference weight modelling

    Figure 2.  Visualization results of spatial regularization weights of different target tracking algorithms

    Figure 3.  Adjustment process of dynamic label ${\boldsymbol{l}}_t$

    Figure 4.  Frame-by-frame comparison of center location errors and overlap rates of different algorithms

    Figure 5.  Overall flow chart of the proposed algorithm

    Figure 6.  Distance precision and AUC curves of different algorithms on OTB2015 dataset

    Figure 7.  Distance precision and AUC curves of different algorithms on TempleColor128 dataset

    Figure 8.  Distance precision and AUC curves of different algorithms on UAV20L dataset

    Figure 9.  Partial results of video simulation

    Figure 10.  Results of ablation experiments with different evaluation indicators on OTB2015 dataset

    Figure 11.  Results of ablation experiments of different components on OTB2015 dataset

    Table 1.  Comparison of average tracking speed of different algorithms on three datasets (frame/s)

    Algorithm    OTB2015    TempleColor128    UAV20L
    Proposed       22.41         20.84         19.49
    BACF           35.43         36.75         27.13
    STRCF          24.28         21.31         19.17
    SRDCF           7.60          8.42          6.04
    Staple         82.80         79.16         73.71
    ARCF           17.80         16.49         16.08
    AutoTrack      28.44         27.52         23.46
    ECO_HC         59.39         58.64         63.21
    LCT            22.78         25.26         29.54
    HCF             1.97          1.92          6.70
    HDT             3.69          3.57          2.88

    Table 2.  Distance precision of different algorithms on OTB2015 dataset with 11 video attributes

    Algorithm     IV     SV     OC     DEF    MB     FM     IPR    OPR    OV     BC     LR
    Proposed     0.881  0.861  0.841  0.895  0.867  0.853  0.856  0.882  0.870  0.897  0.782
    BACF         0.803  0.769  0.730  0.764  0.745  0.790  0.792  0.781  0.756  0.805  0.741
    STRCF        0.837  0.840  0.810  0.841  0.826  0.802  0.811  0.850  0.766  0.872  0.737
    SRDCF        0.781  0.743  0.727  0.730  0.767  0.769  0.742  0.740  0.603  0.775  0.663
    Staple       0.778  0.724  0.724  0.747  0.700  0.710  0.768  0.738  0.668  0.749  0.610
    ARCF         0.763  0.770  0.737  0.767  0.757  0.768  0.785  0.769  0.671  0.760  0.749
    AutoTrack    0.783  0.742  0.735  0.735  0.735  0.746  0.777  0.766  0.696  0.755  0.773
    ECO_HC       0.775  0.792  0.777  0.793  0.770  0.799  0.762  0.801  0.764  0.807  0.847
    LCT          0.743  0.678  0.678  0.685  0.670  0.681  0.781  0.746  0.592  0.734  0.537
    HCF          0.830  0.798  0.776  0.790  0.804  0.815  0.864  0.816  0.677  0.843  0.831
    HDT          0.809  0.774  0.744  0.802  0.783  0.779  0.799  0.787  0.616  0.789  0.849
    Note: Bold font indicates the best tracking value under the corresponding attribute.

    Table 3.  Success rate of different algorithms on OTB2015 dataset with 11 video attributes

    Algorithm     IV     SV     OC     DEF    MB     FM     IPR    OPR    OV     BC     LR
    Proposed     0.678  0.639  0.647  0.664  0.673  0.651  0.624  0.652  0.638  0.682  0.550
    BACF         0.622  0.572  0.565  0.571  0.575  0.599  0.582  0.578  0.548  0.605  0.532
    STRCF        0.652  0.631  0.614  0.605  0.652  0.628  0.602  0.626  0.583  0.647  0.538
    SRDCF        0.607  0.559  0.554  0.541  0.594  0.597  0.541  0.547  0.461  0.582  0.495
    Staple       0.593  0.518  0.541  0.548  0.540  0.540  0.549  0.534  0.476  0.561  0.399
    ARCF         0.600  0.561  0.560  0.583  0.605  0.591  0.562  0.559  0.500  0.588  0.512
    AutoTrack    0.604  0.542  0.555  0.559  0.585  0.583  0.554  0.555  0.534  0.562  0.540
    ECO_HC       0.603  0.592  0.587  0.587  0.604  0.618  0.553  0.587  0.560  0.603  0.589
    LCT          0.517  0.428  0.476  0.481  0.516  0.507  0.529  0.505  0.446  0.528  0.299
    HCF          0.550  0.485  0.533  0.530  0.585  0.570  0.566  0.540  0.474  0.585  0.439
    HDT          0.529  0.477  0.522  0.540  0.577  0.555  0.536  0.528  0.453  0.544  0.463
    Note: Bold font indicates the best tracking value under the corresponding attribute.
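    The distance precision and success-rate (AUC) scores reported in the figures and tables above follow the usual OTB-style protocol: distance precision counts the frames whose center location error stays within a pixel threshold (conventionally 20 px), and AUC averages the success rate over IoU overlap thresholds from 0 to 1. A minimal sketch of both metrics, assuming per-frame center errors and overlaps are already available (function names are illustrative):

```python
import numpy as np

def distance_precision(center_errors, threshold=20.0):
    """Fraction of frames whose center location error is within `threshold` pixels
    (the conventional distance precision score uses a 20-pixel threshold)."""
    errors = np.asarray(center_errors, dtype=np.float64)
    return float(np.mean(errors <= threshold))

def success_auc(overlaps, num_thresholds=101):
    """Area under the success curve: the success rate (fraction of frames whose
    IoU overlap exceeds a threshold) averaged over thresholds in [0, 1]."""
    ious = np.asarray(overlaps, dtype=np.float64)
    thresholds = np.linspace(0.0, 1.0, num_thresholds)
    return float(np.mean([(ious > t).mean() for t in thresholds]))
```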
Publication history
  • Received:  2021-11-16
  • Accepted:  2022-03-25
  • Published online:  2022-05-17
  • Issue date:  2023-10-01
