Journal of Beijing University of Aeronautics and Astronautics, 2016, Vol. 42, Issue 10: 2146-2154. doi: 10.13700/j.bh.1001-5965.2015.0644


Tracking approach based on online multiple instance learning with weight distribution and multiple feature representation

YANG Honghong, QU Shiru, MI Xiuxiu

  1. School of Automation, Northwestern Polytechnical University, Xi'an 710072, China
  • Received: 2015-09-30; Online: 2016-10-20; Published: 2016-03-15
  • Corresponding author: QU Shiru, Tel.: 029-88431386, E-mail: qushiru@nwpu.edu.cn
  • About the authors: YANG Honghong, female, Ph.D. candidate; research interests: object detection and tracking; Tel.: 029-88431386, E-mail: yanghonghong0615@163.com. QU Shiru, female, Ph.D., professor; research interests: intelligent systems and information engineering; Tel.: 029-88431386, E-mail: qushiru@nwpu.edu.cn. MI Xiuxiu, female, M.S. candidate; research interests: feature extraction and object tracking; Tel.: 029-88431386, E-mail: mixiuxiu@mail.nwpu.edu.cn
  • Supported by:
    Aeronautical Science Foundation of China (2012ZC53043); Specialized Research Fund for the Doctoral Program of Higher Education of China (20096102110027); Astronautic Science and Technology Innovation Foundation (CASC201104)


Abstract: Most existing tracking algorithms are prone to target drift in complex environments with occlusion, pose variation and illumination change. This paper proposes an online visual target tracking algorithm based on the multiple instance learning (MIL) tracking framework. The standard MIL tracker cannot describe changes in target appearance accurately because it relies on a single Haar-like feature, and during learning it assigns the same weight to every positive and negative instance in a sample bag, ignoring the fact that different instances contribute differently to their bags. The proposed algorithm therefore represents the target appearance with multiple complementary features and constructs the classifier from them; by integrating the complementary properties of these features into the online MIL learning process, it builds a more accurate target appearance model and overcomes the insufficient appearance description of the original MIL tracker. At the same time, instance weights are assigned according to the importance of each positive and negative instance to its bag, which improves tracking precision. Experimental results show that the proposed algorithm is robust to severe illumination change, occlusion, scale variation and in-plane rotation. Over 5 groups of challenging test video sequences, its average center position error is only 10.14 pixels, far smaller than those of incremental visual tracking (IVT), MIL and online AdaBoost (OAB), which are 17.99, 20.29 and 33.64 pixels, respectively.
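The two ideas summarized above — the Noisy-OR bag model of standard MIL tracking and a weighted-instance variant — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the sum fusion of feature-channel responses, the Gaussian distance-based weights and the `sigma` value are all illustrative assumptions.

```python
import numpy as np

def fused_response(channel_responses):
    """Combine classifier responses from complementary feature channels
    (e.g., Haar-like plus a histogram-based feature); a plain sum is one
    simple fusion rule used here for illustration."""
    return float(np.sum(channel_responses))

def instance_prob(h):
    """Map a classifier response h(x) to P(y = 1 | x) with a sigmoid."""
    return 1.0 / (1.0 + np.exp(-h))

def noisy_or_bag_prob(instance_probs):
    """Standard MIL bag probability (Noisy-OR): a bag is positive if at
    least one of its instances is positive."""
    p = np.asarray(instance_probs, dtype=float)
    return float(1.0 - np.prod(1.0 - p))

def weighted_bag_prob(instance_probs, distances, sigma=10.0):
    """Weighted bag probability: instances closer to the current target
    location receive larger weights (Gaussian kernel of the distance;
    sigma is an illustrative choice)."""
    p = np.asarray(instance_probs, dtype=float)
    d = np.asarray(distances, dtype=float)
    w = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    w /= w.sum()                      # weights sum to 1 over the bag
    return float(np.sum(w * p))
```

With equal distances the weighted model reduces to the mean of the instance probabilities, while a distant (likely misaligned) instance is downweighted — e.g., `weighted_bag_prob([0.2, 0.8], [0.0, 30.0])` is pulled toward 0.2, whereas the Noisy-OR model would treat both instances alike.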

Key words: multiple instance learning, joint multiple feature representation, weight distribution, target tracking, classifier
