Crowd density estimation for fisheye images

YANG Jialin, LIN Chunyu, NIE Lang, LIU Meiqin, ZHAO Yao

Citation: YANG Jialin, LIN Chunyu, NIE Lang, et al. Crowd density estimation for fisheye images[J]. Journal of Beijing University of Aeronautics and Astronautics, 2022, 48(8): 1455-1463. doi: 10.13700/j.bh.1001-5965.2021.0520 (in Chinese)


doi: 10.13700/j.bh.1001-5965.2021.0520

Details
    Corresponding author:

    LIN Chunyu, E-mail: cylin@bjtu.edu.cn

  • CLC number: TP391


Funds: 

National Natural Science Foundation of China 62172032

National Natural Science Foundation of China 61972028

  • Abstract:

    To address the failure of conventional crowd density estimation methods under fisheye distortion, a crowd density estimation method for fisheye images is proposed, enabling crowd-flow monitoring in fisheye camera scenes. In terms of model structure, deformable convolution is introduced to improve the model's adaptability to fisheye distortion. In terms of target data generation, the distortion characteristics of fisheye images are exploited: based on a Gaussian transformation, the density maps converted from crowd annotations are matched to a distribution consistent with the fisheye distortion. In terms of training, the computation of the loss function is optimized, preventing the model from getting trapped in local optima. Since datasets for fisheye crowd counting are scarce, a corresponding dataset was collected and annotated. In subjective and objective experiments against classical methods, the proposed method achieves a mean absolute error (MAE) of 3.78 on the test set, lower than that of the compared methods, demonstrating the superiority of the proposed crowd density estimation method for fisheye images.
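The density maps mentioned in the abstract follow a standard crowd-counting construction: each annotated head position becomes a unit impulse that is smoothed with a Gaussian kernel, so the map integrates to the person count. Below is a minimal NumPy sketch of the isotropic baseline only; the paper's fisheye-specific, distortion-matched variant (rotated/anisotropic Gaussians) is not reproduced here, and the function names are illustrative, not from the paper:

```python
import numpy as np

def gaussian_kernel2d(sigma, radius):
    """Normalized 2-D Gaussian kernel, so each person contributes total mass 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def density_map_from_points(points, shape, sigma=4.0):
    """Build a crowd density map from (x, y) head annotations.

    The map is accumulated on a zero-padded canvas so kernels near the
    border are not clipped during stamping, then cropped back to `shape`.
    For annotations away from the image border, the result sums to the
    number of annotated people.
    """
    h, w = shape
    radius = int(3 * sigma)
    kernel = gaussian_kernel2d(sigma, radius)
    pad = radius
    canvas = np.zeros((h + 2 * pad, w + 2 * pad), dtype=np.float64)
    for x, y in points:
        yi, xi = int(round(y)) + pad, int(round(x)) + pad
        if pad <= yi < h + pad and pad <= xi < w + pad:
            canvas[yi - radius:yi + radius + 1,
                   xi - radius:xi + radius + 1] += kernel
    return canvas[pad:h + pad, pad:w + pad]
```

Summing the predicted or ground-truth map then yields the crowd count used for evaluation.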

  • Figure 1.  Influence of σ on Gaussian distribution

    Figure 2.  Structure of prediction model

    Figure 3.  Influence of covariance matrix on Gaussian distribution

    Figure 4.  Effects of rotation

    Figure 5.  Transformation effect

    Figure 6.  Labeled points and environment areas

    Figure 7.  Comparison of MAE

    Figure 8.  Display of crowd density maps

    Table 1.  Best results of different methods

    Method          MAE     Bias
    D2CNet          6.18    0.44
    MCNN            12.21   0.88
    CSRNet          5.66    0.41
    Bayesian Loss   3.96    0.29
    Proposed method 3.78    0.27
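The MAE reported above is the mean absolute difference between each test image's predicted count (the integral of its predicted density map) and its ground-truth count. A small helper, assuming per-image counts are already extracted (the function name is illustrative; the Bias column's exact definition is not specified here and is not computed):

```python
def mae(pred_counts, gt_counts):
    """Mean absolute error over per-image crowd counts."""
    if len(pred_counts) != len(gt_counts):
        raise ValueError("count lists must have equal length")
    return sum(abs(p - g) for p, g in zip(pred_counts, gt_counts)) / len(pred_counts)
```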

    Table 2.  Ablation results

    Method          MAE     Bias
    Scheme 1        5.59    0.40
    Scheme 2        4.22    0.30
    Scheme 3        3.89    0.28
    Control group   3.78    0.27
  • [1] DAI J F, QI H Z, XIONG Y W, et al. Deformable convolutional networks[C]//2017 IEEE International Conference on Computer Vision. Piscataway: IEEE Press, 2017: 764-773.
    [2] XUE Z C, XUE N, XIA G S, et al. Learning to calibrate straight lines for fisheye image rectification[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE Press, 2019: 1643-1651.
    [3] LIAO K, LIN C Y, ZHAO Y. A deep ordinal distortion estimation approach for distortion rectification[J]. IEEE Transactions on Image Processing, 2021, 30: 3362-3375. doi: 10.1109/TIP.2021.3061283
    [4] XU J, YANG H B, SONG Y, et al. A face recognition technology based on fisheye camera[J]. Information & Communications, 2018, 31(1): 131-132 (in Chinese). https://www.cnki.com.cn/Article/CJFDTOTAL-HBYD201801057.htm
    [5] ZHANG Y Y, ZHOU D S, CHEN S Q, et al. Single-image crowd counting via multi-column convolutional neural network[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2016: 589-597.
    [6] LI Y H, ZHANG X F, CHEN D M. CSRNet: Dilated convolutional neural networks for understanding the highly congested scenes[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR). Piscataway: IEEE Press, 2018: 1091-1100.
    [7] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. (2015-04-10)[2021-09-01]. https://arxiv.org/abs/1409.1556.
    [8] GUO D, LI K, ZHA Z J, et al. DADNet: Dilated-attention-deformable ConvNet for crowd counting[C]//Proceedings of the 27th ACM International Conference on Multimedia. New York: ACM, 2019: 1823-1832.
    [9] ZHANG A R, SHEN J Y, XIAO Z H, et al. Relational attention network for crowd counting[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE Press, 2019: 6787-6796.
    [10] WANG Q, BRECKON T P. Crowd counting via segmentation guided attention networks and curriculum loss[EB/OL]. (2020-08-03)[2021-09-01]. https://arxiv.org/abs/1911.07990.
    [11] DAS S S S, RASHID S M M, ALI M E. CCCNet: An attention based deep learning framework for categorized crowd counting[EB/OL]. (2019-11-12)[2021-09-01]. https://arxiv.org/abs/1912.05765.
    [12] ZHANG A R, YUE L, SHEN J Y, et al. Attentional neural fields for crowd counting[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE Press, 2019: 5713-5722.
    [13] GAO J Y, HAN T, WANG Q, et al. Domain-adaptive crowd counting via inter-domain features segregation and Gaussian-prior reconstruction[EB/OL]. (2019-11-08)[2021-09-01]. https://arxiv.org/abs/1912.03677.
    [14] WAN J, CHAN A. Adaptive density map generation for crowd counting[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE Press, 2019: 1130-1139.
    [15] MA Z H, WEI X, HONG X P, et al. Bayesian loss for crowd count estimation with point supervision[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway: IEEE Press, 2019: 6141-6150.
    [16] CHENG J, XIONG H P, CAO Z G, et al. Decoupled two-stage crowd counting and beyond[J]. IEEE Transactions on Image Processing, 2021, 30: 2862-2875. doi: 10.1109/TIP.2021.3055631
Publication history
  • Received: 2021-09-06
  • Accepted: 2021-10-01
  • Available online: 2021-11-17
  • Issue published: 2022-08-20
