Volume 45 Issue 10
Oct. 2023
HOU Jingyi, TANG Yuxin, YU Xinbo, LIU Zhijie. Inferring local topology via variational convolution for graph representation[J]. Chinese Journal of Engineering, 2023, 45(10): 1750-1758. doi: 10.13374/j.issn2095-9389.2022.07.24.005

Inferring local topology via variational convolution for graph representation

doi: 10.13374/j.issn2095-9389.2022.07.24.005
More Information
  • Corresponding author: E-mail: liuzhijie2012@gmail.com
  • Received Date: 2022-07-24
  • Available Online: 2023-05-27
  • Publication Date: 2023-10-25
  • The development of deep learning techniques and the support of big-data computing power have revolutionized graph representation research by facilitating the learning of different graph neural network structures. Existing methods, such as graph attention networks, mainly focus on global information propagation in graph neural networks and have theoretically proven their strong representation capability. However, these general methods lack flexible representation mechanisms when facing graph data whose local topology carries specific semantics, such as functional groups in chemical reactions. Accordingly, it is important to further exploit local structure representations for graph-based tasks. Several existing methods either use domain expert knowledge or conduct subgraph isomorphism counting to learn local topology representations of graphs. However, there is no guarantee that these methods can easily be generalized to different domains without specific knowledge or complex substructure preprocessing. In this study, we propose a simple and automatic local topology inference method that uses variational convolutions to improve the local representation ability of graph attention networks. The proposed method not only considers relation reasoning and message passing on the global graph structure but also adaptively learns the graph's local structure representations under the guidance of readily accessible statistical priors. More specifically, variational inference is used to adaptively learn the convolutional template size; the inference is conducted layer by layer under the guidance of the statistical priors so that the template size adapts to multiple subgraphs with different structures in a self-supervised way. The variational convolution module is easily pluggable and can be concatenated with arbitrary hidden layers of any graph neural network. Moreover, owing to the locality of the convolution operations, the relations between graph nodes can be further sparsified to alleviate the over-squashing problem in the global information propagation of the graph neural network. As a result, the proposed method significantly improves the overall representation ability of the graph attention network by using variational inference of the convolutional operations for local topology representation. Experiments are conducted on three large-scale, publicly available datasets: OGBG-MolHIV, USPTO, and Buchwald-Hartwig. Experimental results show that exploiting various kinds of local topological information helps improve the performance of the graph attention network.
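
    To make the mechanism described above concrete, the following is a minimal, hedged sketch (not the authors' released code) of a pluggable variational convolution block: it places a concrete (Gumbel-softmax) distribution over a few candidate convolutional template sizes, mixes the corresponding depthwise 1D convolutions, and regularizes the learned size distribution toward a statistical prior (e.g., typical functional-group sizes). The class name, the candidate sizes, and the assumption that node features arrive in some fixed order (such as canonical SMILES atom order) with shape (batch, nodes, dim) are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VariationalLocalConv(nn.Module):
        """Hypothetical pluggable block: variational choice among candidate local template sizes."""
        def __init__(self, dim, candidate_sizes=(3, 5, 7), prior=None, tau=1.0):
            super().__init__()
            # One depthwise 1D convolution per candidate template size.
            self.convs = nn.ModuleList(
                [nn.Conv1d(dim, dim, k, padding=k // 2, groups=dim) for k in candidate_sizes]
            )
            # Variational logits over template sizes, plus a statistical prior to regularize toward.
            self.logits = nn.Parameter(torch.zeros(len(candidate_sizes)))
            if prior is None:
                prior = torch.full((len(candidate_sizes),), 1.0 / len(candidate_sizes))
            else:
                prior = torch.as_tensor(prior, dtype=torch.float)
            self.register_buffer("prior", prior)
            self.tau = tau

        def kl_to_prior(self):
            # KL(q || p) between the learned size distribution and the prior;
            # add this (scaled by a small coefficient) to the task loss.
            q = F.softmax(self.logits, dim=-1)
            return torch.sum(q * (torch.log(q + 1e-8) - torch.log(self.prior + 1e-8)))

        def forward(self, h):                       # h: (batch, nodes, dim)
            x = h.transpose(1, 2)                   # -> (batch, dim, nodes) for Conv1d
            if self.training:                       # continuous relaxation of the discrete choice
                w = F.gumbel_softmax(self.logits, tau=self.tau, hard=False)
            else:                                   # deterministic choice at inference time
                w = F.one_hot(self.logits.argmax(), len(self.convs)).float()
            out = sum(wi * conv(x) for wi, conv in zip(w, self.convs))
            return h + out.transpose(1, 2)          # residual output, same shape as the input

    In use, such a block would be inserted after any hidden layer of a graph attention network (or other GNN), with kl_to_prior() added to the training objective so that the template-size distribution is guided by the statistical prior in a self-supervised way.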

     

