Citation: JI L N, GUO X M, YANG F B. Adaptive layered fusion algorithm for infrared and visible video based on possibility theory[J]. Journal of Beijing University of Aeronautics and Astronautics, 2024, 50(10): 3021-3031 (in Chinese). doi: 10.13700/j.bh.1001-5965.2022.0765

Adaptive layered fusion algorithm for infrared and visible video based on possibility theory

doi: 10.13700/j.bh.1001-5965.2022.0765
Funds: National Natural Science Foundation of China (61702465); Fundamental Research Program of Shanxi Province (201901D211238, 202203021221104); Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (2020L0264); Postgraduate Science and Technology Project of the North University of China (2022180501)
More Information
  • Corresponding author: E-mail: jlnnuc@163.com
  • Received Date: 07 Sep 2022
  • Accepted Date: 19 Nov 2022
  • Available Online: 16 Dec 2022
  • Publish Date: 14 Dec 2022
  • Existing infrared and visible video fusion models cannot dynamically adjust their fusion strategy to the differences between source videos, which leads to poor fusion performance or even fusion failure. To address this issue, an adaptive layered fusion algorithm for infrared and visible video based on possibility theory was proposed. First, the magnitudes of the various difference features in the region of interest of each frame of the video sequence were calculated, and the main difference features of each frame were obtained. Second, a layered fusion framework was built and the variables of each layer were determined. The fusion effectiveness of the different variables for each difference feature was calculated from cosine similarity, and possibility theory was used to construct the corresponding fusion effectiveness distributions. Then, the fusion effect of the different variables on the various difference features was analyzed layer by layer, and the optimal variable of each layer was selected. Finally, adaptive layered fusion of infrared and visible video was realized through the optimal combination of variables. Experimental results show that the proposed method preserves typical infrared targets and visible structural details well, and that it outperforms single fusion methods in both quantitative analysis and qualitative evaluation.
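    The per-layer variable selection that the abstract describes can be illustrated with a short, hypothetical Python sketch. The candidate variable names, the cosine-similarity effectiveness measure, and the max-min possibility-based selection below are simplified assumptions for illustration only, not the authors' exact formulation.

    ```python
    import numpy as np

    def cosine_effectiveness(fused_feature, reference_feature):
        """Fusion effectiveness of one candidate variable for one difference feature,
        taken here as the cosine similarity between feature vectors (a simplified
        stand-in for the measure described in the abstract)."""
        denom = np.linalg.norm(fused_feature) * np.linalg.norm(reference_feature) + 1e-12
        return float(np.dot(fused_feature, reference_feature)) / denom

    def select_layer_variable(effectiveness):
        """effectiveness: {variable_name: [score per difference feature]}.
        For each difference feature, rescale the scores across variables into a
        possibility distribution (best variable -> possibility 1), then choose the
        variable whose worst-case possibility over all features is largest
        (a max-min selection in the spirit of possibility theory)."""
        variables = list(effectiveness)
        scores = np.array([effectiveness[v] for v in variables], dtype=float)  # variables x features
        possibility = scores / (scores.max(axis=0, keepdims=True) + 1e-12)
        return variables[int(np.argmax(possibility.min(axis=1)))]

    # Hypothetical per-layer candidates and their effectiveness for three
    # difference features (e.g. brightness, edge, and texture differences).
    candidates = {
        "NSCT": [0.82, 0.74, 0.69],
        "DWT": [0.77, 0.81, 0.64],
        "LatLRR": [0.73, 0.70, 0.79],
    }
    print(select_layer_variable(candidates))  # prints the variable chosen for this layer
    ```

    In the full algorithm this selection would be repeated for every layer of the fusion framework and driven by the main difference features extracted from each frame.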

     
