
Visualization and noise reduction algorithm based on event camera

YAN Changda, WANG Xia, ZUO Yifan, LI Leilei, CHEN Jiabin

Citation: YAN Changda, WANG Xia, ZUO Yifan, et al. Visualization and noise reduction algorithm based on event camera[J]. Journal of Beijing University of Aeronautics and Astronautics, 2021, 47(2): 342-350. doi: 10.13700/j.bh.1001-5965.2020.0192 (in Chinese)


doi: 10.13700/j.bh.1001-5965.2020.0192
Funds: 

National Natural Science Foundation of China 61871034

Equipment Pre-development Project 41417070401

Details
    Biographies:

    YAN Changda  male, Ph.D. candidate. Research interests: visual navigation

    WANG Xia  female, Ph.D., associate professor and doctoral supervisor. Research interests: photoelectric imaging technology and systems

    Corresponding author:

    WANG Xia. E-mail: angelniuniu@bit.edu.cn

  • CLC number: V249.32+5; TN957.52

  • Abstract:

    To address the problems that the asynchronous event stream output by an event camera is unsuitable for direct human observation, difficult to connect to downstream tasks, and contaminated by heavy noise, this paper presents a visualization and noise reduction algorithm. Since the event stream reflects the moving edges of objects in the scene, the algorithm first denoises the stream by exploiting the temporal and spatial continuity of those moving edges, and then accumulates events into event "frames" under a dual constraint on event count and time threshold, making the stream easy to visualize and to use in applications. In experiments on real datasets, the denoising algorithm effectively suppresses background noise, preserves more detailed edge events when motion starts or is slow, and increases the number of valid detected corners; the visualization algorithm reduces the variance of per-frame event counts while maintaining the frame rate, making the information in event "frames" more uniform. The experimental results demonstrate the effectiveness of the proposed visualization and noise reduction algorithm.
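    The denoising step relies on the temporal and spatial continuity of moving edges: a genuine edge event should have spatio-temporal neighbors, while isolated background-activity noise should not. The following is a minimal sketch of that idea, not the authors' exact filter; the 8-neighborhood, the window `dt`, and all names here are illustrative assumptions in the spirit of Ref. [8]:

```python
from typing import List, Tuple

Event = Tuple[float, int, int, int]  # (timestamp in s, x, y, polarity)

def denoise_events(events: List[Event], width: int, height: int,
                   dt: float = 0.005) -> List[Event]:
    """Keep an event only if one of its 8 spatial neighbors fired within
    the last dt seconds; events must be sorted by timestamp."""
    last_ts = [[-float("inf")] * width for _ in range(height)]
    kept = []
    for t, x, y, p in events:
        supported = False
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                if (0 <= nx < width and 0 <= ny < height
                        and t - last_ts[ny][nx] <= dt):
                    supported = True
        last_ts[y][x] = t  # record this pixel's firing time either way
        if supported:
            kept.append((t, x, y, p))
    return kept
```

    An event supported by a recent neighbor survives; an isolated event is discarded as noise.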

     

  • Figure 1.  Pixel structure of three event cameras

    Figure 2.  Comparison of frame-based camera and event camera output

    Figure 3.  Comparison of event stream visualization methods

    Figure 4.  Structure of proposed algorithm

    Figure 5.  Noise processing (Scene 1)

    Figure 6.  Corner detection (Scene 1)

    Figure 7.  Noise processing (Scene 2)

    Figure 8.  Corner detection (Scene 2)

    Figure 9.  Comparison of three visualization methods with different parameters (Scene 1)

    Figure 10.  Comparison of three visualization methods with different parameters (Scene 2)

    Table 1.  Event counts after noise processing (Scene 1)

    Method      Events before denoising    Events after denoising    Removal rate/%
    Ref. [8]    300 000                    147 584                   50.80
    Ref. [9]    300 000                    16 853                    94.38
    Proposed    300 000                    150 390                   49.87

    Table 2.  Event counts after noise processing (Scene 2)

    Method      Events before denoising    Events after denoising    Removal rate/%
    Ref. [8]    200 000                    143 141                   28.42
    Ref. [9]    200 000                    47 770                    76.12
    Proposed    200 000                    144 784                   27.61

    Table 3.  Frame rate and event count variance of event "frames" (Scene 1)

    Parameter                Frame rate/fps    Mean events per frame    Event count variance
    t=0.005 s                199.61            1 242.46                 1 484 647.06
    t=0.01 s                 99.15             2 493.04                 5 904 817.08
    Overlapping t=0.005 s    399.21            1 244.48                 1 485 470.66
    Overlapping t=0.01 s     198.95            2 493.03                 5 901 746.16
    n=5 000                  48.92             4 953.70                 2 053.29
    n=7 500                  31.96             7 479.11                 396.15
    t=0.005 s/n=5 000        198.30            1 250.60                 1 488 157.62
    t=0.01 s/n=5 000         105.67            2 340.09                 3 169 479.88

    Table 4.  Frame rate and event count variance of event "frames" (Scene 2)

    Parameter                Frame rate/fps    Mean events per frame    Event count variance
    t=0.005 s                196.38            1 905.05                 1 390 079.79
    t=0.01 s                 96.88             3 810.11                 5 418 706.82
    Overlapping t=0.005 s    392.76            1 917.66                 1 398 532.31
    Overlapping t=0.01 s     193.76            3 860.91                 5 489 364.28
    n=5 000                  70.69             4 992.55                 42.10
    n=7 500                  47.13             7 239.2                  67 523.44
    t=0.005 s/n=5 000        196.38            1 905.05                 1 389 610.55
    t=0.01 s/n=5 000         107.36            3 447.23                 2 262 338.62
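    The statistics in Tables 3 and 4 contrast fixed-time slicing (t), overlapping windows, fixed-count slicing (n), and the dual t/n limit. The sketch below shows one way such a dual event-count/time-threshold accumulation could work; the closing rule (a frame is emitted as soon as either limit is hit) and all names are assumptions, not the paper's exact scheme:

```python
def events_to_frames(events, n_max=5000, t_max=0.005):
    """Slice a timestamp-sorted event stream into event "frames".
    A frame closes as soon as it holds n_max events OR spans t_max
    seconds, whichever limit is reached first."""
    frames, current, t_start = [], [], None
    for ev in events:
        t = ev[0]
        if t_start is None:
            t_start = t
        # time limit: close the frame before adding an event outside it
        if t - t_start >= t_max and current:
            frames.append(current)
            current, t_start = [], t
        current.append(ev)
        # count limit: close the frame as soon as it is full
        if len(current) >= n_max:
            frames.append(current)
            current, t_start = [], None
    if current:
        frames.append(current)
    return frames
```

    The count limit bounds how much information a fast scene packs into one frame (keeping event-count variance low), while the time limit keeps the frame rate up in slow scenes.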
  • [1] SANG Y S, LI R H, LI Y Q, et al. Research on neuromorphic vision sensor and its applications[J]. Chinese Journal on Internet of Things, 2019, 3(4): 63-71(in Chinese).
    [2] GALLEGO G, DELBRUCK T, ORCHARD G, et al. Event-based vision: A survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 99: 1-1.
    [3] WANG T T, CAI Z H, WANG Y X. Integrated vision/inertial navigation method of UAVs in indoor environment[J]. Journal of Beijing University of Aeronautics and Astronautics, 2018, 44(1): 176-186(in Chinese). doi: 10.13700/j.bh.1001-5965.2016.0965
    [4] VIDAL A R, REBECQ H, HORSTSCHAEFER T, et al. Ultimate SLAM? Combining events, images, and IMU for robust visual SLAM in HDR and high-speed scenarios[J]. IEEE Robotics and Automation Letters, 2018, 3(2): 994-1001. doi: 10.1109/LRA.2018.2793357
    [5] MUEGGLER E, GALLEGO G, REBECQ H, et al. Continuous-time visual-inertial odometry for event cameras[J]. IEEE Transactions on Robotics, 2018, 34(6): 1425-1440. doi: 10.1109/TRO.2018.2858287
    [6] MA Y Y, YE Z H, LIU K H, et al. Event-based visual localization and mapping algorithms: A survey[J]. Acta Automatica Sinica, 2020, 46: 1-11(in Chinese).
    [7] XIE X M, DU J, SHI G M, et al. An improved approach for visualizing dynamic vision sensor and its video denoising[C]//Proceedings of the International Conference on Video and Image Processing. New York: ACM, 2017: 176-180.
    [8] DELBRUCK T. Frame-free dynamic digital vision[C]//Proceedings of International Symposium on Secure-Life Electronics, Advanced Electronics for Quality Life and Society. 2008: 21-26.
    [9] FENG Y, LV H Y, LIU H L, et al. Event density based denoising method for dynamic vision sensor[J]. Applied Sciences, 2020, 10(6): 2024. doi: 10.3390/app10062024
    [10] HUANG J, GUO M H, CHEN S S. A dynamic vision sensor with direct logarithmic output and full-frame picture-on-demand[C]//Proceedings of the 2017 IEEE International Symposium on Circuits and Systems (ISCAS). Piscataway: IEEE Press, 2017: 1-4.
    [11] MUNDA G, REINBACHER C, POCK T. Real-time intensity-image reconstruction for event cameras using manifold regularization[J]. International Journal of Computer Vision, 2018, 126(12): 1381-1393. doi: 10.1007/s11263-018-1106-2
    [12] BARDOW P, DAVISON A J, LEUTENEGGER S. Simultaneous optical flow and intensity estimation from an event camera[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2016: 884-892.
    [13] SCHEERLINCK C, BARNES N, MAHONY R. Continuous-time intensity estimation using event cameras[C]//Lecture Notes in Computer Science. Berlin: Springer, 2018: 308-324.
    [14] LICHTSTEINER P, POSCH C, DELBRUCK T. A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor[J]. IEEE Journal of Solid-State Circuits, 2008, 43(2): 556-576.
    [15] POSCH C, MATOLIN D, WOHLGENANNT R. A QVGA 143 dB dynamic range frame-free PWM image sensor with lossless pixel-level video compression and time-domain CDS[J]. IEEE Journal of Solid-State Circuits, 2010, 46(1): 259-275.
    [16] BRANDLI C, BERNER R, YANG M, et al. A 240×180 130 dB 3 μs latency global shutter spatiotemporal vision sensor[J]. IEEE Journal of Solid-State Circuits, 2014, 49(10): 2333-2341. doi: 10.1109/JSSC.2014.2342715
    [17] MUEGGLER E, REBECQ H, GALLEGO G, et al. The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM[J]. International Journal of Robotics Research, 2017, 36(2): 142-149. doi: 10.1177/0278364917691115
    [18] SHI J B, TOMASI C. Good features to track[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 1994: 593-600.
Figures (10) / Tables (4)
Publication history
  • Received: 2020-05-18
  • Accepted: 2020-06-18
  • Published online: 2021-02-20
