
Graph Pooling Method Based on Multilevel Union

DONG Xiaolong, HUANG Jun, QIN Feng, HONG Xudong

Citation: DONG X L, HUANG J, QIN F, et al. Graph pooling method based on multilevel union[J]. Journal of Beijing University of Aeronautics and Astronautics, 2024, 50(2): 559-568 (in Chinese). doi: 10.13700/j.bh.1001-5965.2022.0386


doi: 10.13700/j.bh.1001-5965.2022.0386

Graph pooling method based on multilevel union

Funds: National Natural Science Foundation of China (61806005); The University Synergy Innovation Program of Anhui Province (GXXT-2020-012); Natural Science Foundation of the Educational Commission of Anhui Province (KJ2021A0372, KJ2019A0064)
Details
    Corresponding author:

    E-mail: huangjun.cs@ahut.edu.cn

  • CLC number: TP37; TP183
  • Abstract:

    Graph pooling methods have been widely applied in bioinformatics, chemistry, social networks, recommender systems, and other fields, but most existing graph pooling methods neither handle node selection well nor address the loss of node information caused by pooling. To tackle these problems, a new multilevel union graph pooling (MUPool) method is proposed. The method uses a multi-view module to capture node features from multiple views, i.e., different features are extracted by multiple convolution modules. A multilevel union module is also proposed, which concatenates the outputs of different pooling layers so that each layer can fuse the information of all preceding layers. Furthermore, a late-fusion module builds one classifier for each pooling layer and fuses their predictions to obtain the final classification result. Experiments on multiple datasets show that the proposed method improves accuracy by 1.62% on average; it can also be combined with existing hierarchical pooling methods, and the combined methods improve accuracy by 2.45% on average.
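The pipeline outlined in the abstract — convolve, pool, concatenate each level's readout with those of earlier levels, then fuse per-level classifiers — can be sketched on a toy graph. This is only a minimal NumPy illustration of the idea, not the authors' implementation: the TopK-style scoring, mean readout, averaged logits, and the `gcn_layer`/`topk_pool` helpers are all assumptions made here for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(adj, x, w):
    """Toy graph convolution: mean-neighbor aggregation, linear map, tanh."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    return np.tanh(((adj @ x) / deg) @ w)

def topk_pool(x, score_w, ratio=0.5):
    """TopK-style pooling: keep the highest-scoring fraction of nodes."""
    scores = (x @ score_w).ravel()
    k = max(1, int(len(scores) * ratio))
    idx = np.argsort(scores)[-k:]
    return x[idx], idx

# Toy graph: 8 nodes with 4 features each, random symmetric adjacency.
n, f, hidden, classes = 8, 4, 6, 2
adj = (rng.random((n, n)) < 0.4).astype(float)
adj = np.maximum(adj, adj.T)
x = rng.standard_normal((n, f))

w1 = rng.standard_normal((f, hidden))
w2 = rng.standard_normal((hidden, hidden))
s1 = rng.standard_normal((hidden, 1))
s2 = rng.standard_normal((hidden, 1))

# Level 1: convolve, pool, read out (mean over remaining nodes).
h1 = gcn_layer(adj, x, w1)
p1, idx1 = topk_pool(h1, s1)
r1 = p1.mean(axis=0)

# Level 2 operates on the pooled subgraph.
adj2 = adj[np.ix_(idx1, idx1)]
h2 = gcn_layer(adj2, p1, w2)
p2, _ = topk_pool(h2, s2)
r2 = p2.mean(axis=0)

# Multilevel union: the level-2 representation is concatenated with level 1's,
# so the deeper level fuses information from all preceding levels.
u2 = np.concatenate([r1, r2])

# Late fusion: one classifier per pooling level, predictions averaged.
c1 = rng.standard_normal((hidden, classes))
c2 = rng.standard_normal((2 * hidden, classes))
logits = (r1 @ c1 + u2 @ c2) / 2
print(logits.shape)  # prints (2,)
```

The concatenation step is what distinguishes the multilevel union from a plain hierarchical stack: each pooling level's classifier sees both its own coarsened readout and the readouts of the levels before it, so information discarded by later pooling steps still reaches the final fused prediction.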

     

  • Figure 1.  Network framework of the MUPool method

    Figure 2.  Schematic diagram of the multilevel union module

    Figure 3.  Plug-in experiment

    Figure 4.  Effect of pooling rate on GPU memory on the PROTEINS[41] dataset

    Figure 5.  Effect of pooling rate on GPU memory on the Mutagenicity[43] dataset

    Figure 6.  Running time comparison between MUPool and the comparison methods

    Figure 7.  Accuracy of different pooling rates on different datasets

    Figure 8.  Accuracy of different hidden layer sizes on different datasets

    Table 1.  Datasets and parameters

    Dataset Graphs Classes Avg. nodes Avg. edges
    IMDB-B[40] 1 000 2 19.77 96.53
    IMDB-M[40] 1 500 3 13.00 65.54
    PROTEINS[41] 1 113 2 39.06 72.82
    D&D[42] 1 178 2 284.32 715.66
    Mutagenicity[43] 4 337 2 30.32 30.77

    Table 2.  Accuracy of MUPool and comparison methods on five datasets (%)

    Method IMDB-B[40] IMDB-M[40] PROTEINS[41] D&D[42] Mutagenicity[43]
    Set2Set[29] 72.90 ± 0.75 50.19 ± 0.39 71.46 ± 2.17 71.94 ± 0.56 77.69 ± 0.55
    SortPool[30] 70.03 ± 0.86 47.83 ± 0.85 75.54 ± 0.94 79.37 ± 0.97
    DiffPool[10] 73.14 ± 0.70 51.31 ± 0.72 76.25 ± 0.88 80.64 ± 0.72 80.44 ± 0.82
    TopKPool[11] 71.58 ± 0.95 48.59 ± 0.72 77.68 ± 2.23 82.43 ± 0.55 79.14 ± 0.76
    SAGPool[13] 71.86 ± 0.97 76.45 ± 0.82 79.18 ± 0.82
    MinCutPool[43] 72.65 ± 0.75 51.04 ± 0.70 76.05 ± 2.60 80.08 ± 2.30
    ASAP[15] 72.81 ± 0.50 50.78 ± 0.75 74.19 ± 0.97 76.87 ± 0.70 80.12 ± 0.88
    EdgePool[34] 72.46 ± 0.74 50.79 ± 0.59 72.50 ± 3.20 75.85 ± 0.58
    CGIPool[37] 72.40 ± 0.87 51.45 ± 0.65 74.10 ± 2.31 73.11 ± 0.93 80.65 ± 0.79
    GMT[44] 73.48 ± 0.76 50.66 ± 0.82 75.09 ± 0.59 78.72 ± 0.59
    MUPool 76.05 ± 1.31 53.17 ± 0.62 81.65 ± 0.81 86.60 ± 0.73 81.55 ± 1.16

    Table 3.  Accuracy of MUPool after removing each module (%)

    Method IMDB-B[40] IMDB-M[40] PROTEINS[41] D&D[42] Mutagenicity[43]
    MUPool 76.05 53.17 81.65 86.60 81.55
    Without multi-view module 74.40 49.50 77.78 80.66 77.74
    Without multilevel union module 70.65 48.23 75.54 78.51 73.83
    Without late-fusion module 71.51 49.57 76.75 80.36 76.08

    Table 4.  Accuracy of original hierarchical pooling methods and their variants with the M&L module added (%)

    Method IMDB-B[40] IMDB-M[40] PROTEINS[41] D&D[42] Mutagenicity[43]
    SAGPool[13] 71.86 76.45 79.90
    SAGPool+M&L 82.14 80.67 78.85
    CGIPool[37] 72.40 51.45 74.10 73.11 80.65
    CGIPool+M&L 72.60 53.43 74.94 75.63 81.29
  • [1] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]//Proceedings of the 3rd International Conference on Learning Representations. San Diego: ICLR, 2015.
    [2] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2017: 2261-2269.
    [3] ZHANG X, ZHAO J B, LECUN Y. Character-level convolutional networks for text classification[C]//Proceedings of the 28th International Conference on Neural Information Processing Systems. New York: ACM, 2015: 649-657.
    [4] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[EB/OL]. (2017-02-22) [2022-01-06]. http://arxiv.org/abs/1609.02907.
    [5] HAMILTON W L, YING R, LESKOVEC J. Inductive representation learning on large graphs[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 1025-1035.
    [6] NAMAZI R, GHALEBI E, WILLIAMSON S, et al. SMGRL: A scalable multi-resolution graph representation learning framework [EB/OL]. (2022-01-29) [2022-02-02]. https://arxiv.org/abs/2201.12670.
    [7] DEFFERRARD M, BRESSON X, VANDERGHEYNST P. Convolutional neural networks on graphs with fast localized spectral filtering[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems. New York: ACM, 2016: 3844-3852.
    [8] RHEE S, SEO S, KIM S. Hybrid approach of relation network and localized graph convolutional filtering for breast cancer subtype classification[C]//Proceedings of the 27th International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2018: 3527-3534.
    [9] MA Y, WANG S H, AGGARWAL C C, et al. Graph convolutional networks with EigenPooling[C]//Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York: ACM, 2019: 723-731.
    [10] YING R, YOU J X, MORRIS C, et al. Hierarchical graph representation learning with differentiable pooling[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems. New York: ACM, 2018: 4805-4815.
    [11] GAO H Y, JI S W. Graph U-Nets[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022: 4948-4960.
    [12] CANGEA C, VELIČKOVIĆ P, JOVANOVIĆ N, et al. Towards sparse hierarchical graph classifiers[EB/OL]. (2018-11-03) [2022-02-03]. https://doi.org/10.48550/arXiv.1811.01287.
    [13] LEE J, LEE I, KANG J. Self-attention graph pooling[C]//Proceedings of the International Conference on Machine Learning. New York: ACM, 2019: 3734-3743.
    [14] DIEHL F, BRUNNER T, LE M T, et al. Towards graph pooling by edge contraction[C]//Proceedings of the International Conference on Machine Learning. New York: ACM, 2019.
    [15] RANJAN E, SANYAL S, TALUKDAR P. ASAP: Adaptive structure aware pooling for learning hierarchical graph representations[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Washington, D.C.: AAAI, 2020, 34(4): 5470-5477.
    [16] BRUNA J, ZAREMBA W, SZLAM A, et al. Spectral networks and locally connected networks on graphs[EB/OL]. (2014-03-21) [2022-02-09]. https://arxiv.org/abs/1312.6203v3.
    [17] CHUNG F R K, GRAHAM F C. Spectral graph theory[M]. Providence, R.I.: American Mathematical Society, 1997.
    [18] XU B, SHEN H, CAO Q, et al. Graph wavelet neural network[C]//Proceedings of the 7th International Conference on Learning Representations. New Orleans: ICLR, 2019.
    [19] LI M, MA Z, WANG Y G, et al. Fast haar transforms for graph neural networks[J]. Neural Networks, 2020, 128: 188-198. doi: 10.1016/j.neunet.2020.04.028
    [20] MA X, WU G, KIM W H. Multi-resolution graph neural network for identifying disease-specific variations in brain connectivity[EB/OL]. (2019-12-03) [2022-02-03]. https://arxiv.org/abs/1912.01181.
    [21] WANG Y G, LI M, MA Z, et al. Haar graph pooling[C]//Proceedings of the 37th International Conference on Machine Learning. New York: ACM, 2020: 9952–9962.
    [22] ZHENG X, ZHOU B, WANG Y G, et al. Decimated framelet system on graphs and fast g-framelet transforms[J]. Journal of Machine Learning Research, 2022, 23(18): 1-18.
    [23] SCARSELLI F, GORI M, TSOI A C, et al. The graph neural network model[J]. IEEE Transactions on Neural Networks, 2009, 20(1): 61-80.
    [24] TARLOW Y D, BROCKSCHMIDT M, ZEMEL R. Gated graph sequence neural networks[C]//Proceedings of the 4th International Conference on Learning Representations. San Juan: ICLR, 2016.
    [25] GALLICCHIO C, MICHELI A. Fast and deep graph neural networks[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Washington, D.C.: AAAI, 2020, 34(4): 3898-3905.
    [26] CHEN W R, LUO C Y, WANG S K, et al. Representation learning with complete semantic description of knowledge graphs[C]//Proceedings of the 2017 International Conference on Machine Learning and Cybernetics. Piscataway: IEEE Press, 2017: 143-149.
    [27] GAO H Y, LIU Y, JI S W. Topology-aware graph pooling networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(12): 4512-4518.
    [28] ATWOOD J, TOWSLEY D. Diffusion-convolutional neural networks[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems. New York: ACM, 2016: 2001-2009.
    [29] VINYALS O, BENGIO S, KUDLUR M. Order matters: Sequence to sequence for sets[C]//Proceedings of the 4th International Conference on Learning Representations. San Juan: ICLR, 2016.
    [30] ZHANG M, CUI Z, NEUMANN M, et al. An end-to-end deep learning architecture for graph classification[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2018: 4438-4445.
    [31] ITOH T D, KUBO T, IKEDA K. Multi-level attention pooling for graph neural networks: Unifying graph representations with multiple localities[J]. Neural Networks, 2022, 145: 356-373.
    [32] ZHANG Z, BU J J, ESTER M, et al. Hierarchical multi-view graph pooling with structure learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(1): 545-559.
    [33] LIU N, JIAN S L, LI D S, et al. Hierarchical adaptive pooling by capturing high-order dependency for graph representation learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(4): 3952-3965.
    [34] DIEHL F. Edge contraction pooling for graph neural networks[EB/OL]. (2019-05-27) [2022-03-09]. https://arxiv.org/abs/1905.10990.
    [35] QIN J, LIU L, SHEN H, et al. Uniform pooling for graph networks[J]. Applied Sciences, 2020, 10(18): 6287-6301. doi: 10.3390/app10186287
    [36] TANG H T, MA G X, HE L F, et al. CommPOOL: An interpretable graph pooling framework for hierarchical graph representation learning[J]. Neural Networks, 2021, 143: 669-677. doi: 10.1016/j.neunet.2021.07.028
    [37] PANG Y S, ZHAO Y X, LI D S. Graph pooling via coarsened graph infomax[C]//Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2021: 2177-2181.
    [38] SU Z D, HU Z H, LI Y D. Hierarchical graph representation learning with local capsule pooling[C]//ACM Multimedia Asia. New York: ACM, 2021.
    [39] BACCIU D, CONTE A, GROSSI R, et al. K-plex cover pooling for graph neural networks[J]. Data Mining and Knowledge Discovery, 2021, 35(5): 2200-2220. doi: 10.1007/s10618-021-00779-z
    [40] YANARDAG P, VISHWANATHAN S V N. A structural smoothing framework for robust graph comparison[C]//Proceedings of the 28th International Conference on Neural Information Processing Systems. New York: ACM, 2015: 2134-2142.
    [41] FERAGEN A, KASENBURG N, PETERSEN J, et al. Scalable kernels for graphs with continuous attributes[C]//Proceedings of the 26th International Conference on Neural Information Processing Systems. New York: ACM, 2013: 216-224.
    [42] DOBSON P D, DOIG A J. Distinguishing enzyme structures from non-enzymes without alignments[J]. Journal of Molecular Biology, 2003, 330(4): 771-783. doi: 10.1016/S0022-2836(03)00628-4
    [43] KAZIUS J, MCGUIRE R, BURSI R. Derivation and validation of toxicophores for mutagenicity prediction[J]. Journal of Medicinal Chemistry, 2005, 48(1): 312-320. doi: 10.1021/jm040835a
    [44] BAEK J, KANG M, HWANG S J. Accurate learning of graph representations with graph multiset pooling[C]//Proceedings of the 9th International Conference on Learning Representations. Vienna: ICLR, 2021.
Figures (8) / Tables (4)
Publication history
  • Received: 2022-05-19
  • Accepted: 2022-06-23
  • Published online: 2022-10-18
  • Issue published: 2024-02-27
