<th id="5nh9l"></th><strike id="5nh9l"></strike><th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th><strike id="5nh9l"></strike>
<progress id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"><noframes id="5nh9l">
<th id="5nh9l"></th> <strike id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span>
<progress id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span><strike id="5nh9l"><noframes id="5nh9l"><strike id="5nh9l"></strike>
<span id="5nh9l"><noframes id="5nh9l">
<span id="5nh9l"><noframes id="5nh9l">
<span id="5nh9l"></span><span id="5nh9l"><video id="5nh9l"></video></span>
<th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th>
<progress id="5nh9l"><noframes id="5nh9l">


Inferring local topology via variational convolution for graph representation

HOU Jingyi, TANG Yuxin, YU Xinbo, LIU Zhijie

Citation: HOU Jingyi, TANG Yuxin, YU Xinbo, LIU Zhijie. Inferring local topology via variational convolution for graph representation[J]. Chinese Journal of Engineering, 2023, 45(10): 1750-1758. doi: 10.13374/j.issn2095-9389.2022.07.24.005


doi: 10.13374/j.issn2095-9389.2022.07.24.005
Funding: National Natural Science Foundation of China (62106021, U20A20225); Interdisciplinary Research Cultivation Project for Young Teachers of University of Science and Technology Beijing (FRF-IDRY-21-021)
Corresponding author: E-mail: liuzhijie2012@gmail.com

  • CLC number: TP391.4

  • Abstract: Rapid advances in deep learning and in available computing power have made it feasible to optimize and implement graph neural networks with a wide range of architectures, driving substantial progress in representation learning for graph-structured data. Existing graph neural network methods focus mainly on propagating global information among graph nodes, and their strong representational power can be established theoretically. However, when representing graph-structured data whose local topology carries specific semantics, these general-purpose methods lack a flexible mechanism for modeling local structures. In chemical reactions, for example, functional groups, the local structures that make up molecules, often determine molecular properties and take part in the reaction process. Mining the information carried by such local structures is therefore important for a variety of tasks based on graph representations. To this end, this paper proposes a graph representation method that infers local topology via variational convolution: it performs relational reasoning and message passing over the global structure while also adaptively learning the local topology of the graph through variational inference and encoding the resulting local structures with convolution operations, thereby further improving the expressive power of graph neural networks. Experiments on several graph-structured datasets show that exploiting local structural information effectively improves the performance of graph neural networks on graph-based tasks.
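    The mechanism the abstract describes, adaptively selecting a local receptive field per node through variational inference and encoding it with convolution, can be sketched in a few lines. The PyTorch fragment below is a minimal illustration under stated assumptions, not the authors' implementation: the candidate kernel sizes, the Gumbel-Softmax (Concrete) relaxation over them, the uniform prior, and all names are illustrative choices.

        # Minimal sketch: per-node variational choice over local receptive-field
        # sizes (Concrete/Gumbel-Softmax relaxation), with depthwise 1-D
        # convolutions encoding each candidate local structure. Illustrative only.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class VariationalLocalConv(nn.Module):
            def __init__(self, dim: int, kernel_sizes=(3, 5, 7), tau: float = 1.0):
                super().__init__()
                self.tau = tau
                # One depthwise convolution per candidate local-structure scale.
                self.convs = nn.ModuleList(
                    nn.Conv1d(dim, dim, k, padding=k // 2, groups=dim)
                    for k in kernel_sizes
                )
                # Inference network: per-node logits over the candidate scales.
                self.scale_logits = nn.Linear(dim, len(kernel_sizes))

            def forward(self, x: torch.Tensor):
                # x: (batch, seq_len, dim) token/node embeddings.
                logits = self.scale_logits(x)                            # (B, L, K)
                # Reparameterized sample from the relaxed discrete posterior.
                w = F.gumbel_softmax(logits, tau=self.tau, hard=False)   # (B, L, K)
                h = x.transpose(1, 2)                                    # (B, dim, L)
                outs = torch.stack([c(h) for c in self.convs], dim=-1)   # (B, dim, L, K)
                # Mix the multi-scale local encodings with the sampled weights.
                mixed = (outs * w.unsqueeze(1)).sum(-1).transpose(1, 2)  # (B, L, dim)
                # KL(q || uniform prior) regularizes the scale posterior.
                q = F.softmax(logits, dim=-1)
                log_p = torch.log(torch.tensor(1.0 / len(self.convs)))
                kl = (q * (torch.log(q + 1e-9) - log_p)).sum(-1).mean()
                return x + mixed, kl

    At training time the KL term would be added to the task loss with a small weight, in the usual variational fashion.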

     

  • Figure 1. Example of SMILES format transformation and tokenization of a chemical reaction

    Figure 2. Illustration of the self-adaptive convolutional module

    Figure 3. Scale distribution of functional groups of the molecules in the USPTO gram set
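    Figure 1's SMILES conversion and tokenization step can be made concrete with the atom-level regular expression popularized by molecular transformer work (cf. [34]). This is a hedged sketch of that preprocessing, not necessarily the paper's exact tokenizer:

        import re

        # Atom-level SMILES tokenizer (regex as used in molecular transformer
        # work, cf. [34]); illustrative preprocessing for the Figure 1 pipeline.
        SMILES_TOKEN = re.compile(
            r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\."
            r"|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
        )

        def tokenize_smiles(smiles: str) -> list:
            tokens = SMILES_TOKEN.findall(smiles)
            assert "".join(tokens) == smiles, "tokenization should be lossless"
            return tokens

        print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
        # ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', 'c', 'c', 'c', 'c', 'c', '1', ...]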

    Table 1. Test results of ROC-AUC on OGBG-MolHIV

    Method                             ROC-AUC
    EGC-M                              0.7818
    Transformer                        0.7058
    Local encoding                     0.7249
    Multiscale local encoding          0.7535
    Soft-assignment local encoding     0.7519
    Ours                               0.7839

    Table 2. Comparison of our method and the state-of-the-art on OGBG-MolHIV

    Ref.                       ROC-AUC
    Zhang, et al. [37]         0.7799
    Bouritsas, et al. [14]     0.8039
    Wijesinghe, et al. [8]     0.7972
    Ying, et al. [10]          0.8051
    Ours (Graphormer)          0.8189

    Table 3. Comparison of R² scores on the USPTO dataset

    Dataset   Data split   Schwaller, et al. [32]   Local encoding   Multiscale local encoding   Soft-assignment local encoding   Ours
    Subgram   Random       0.195                    0.198            0.196                       0.195                            0.199
    Subgram   Time         0.142                    0.146            0.147                       0.145                            0.150
    Subgram   Smoothed     0.388                    0.390            0.396                       0.397                            0.435
    Gram      Random       0.117                    0.118            0.119                       0.118                            0.121
    Gram      Time         0.095                    0.096            0.096                       0.095                            0.098
    Gram      Smoothed     0.277                    0.279            0.285                       0.284                            0.311

    Table 4. Comparison of average R² scores on the Buchwald-Hartwig dataset

    Methods                    Average R²
    Ahneman, et al. [18]       0.69
    Chuang, et al. [38]        0.59
    Granda, et al. [39]        0.60
    Schwaller, et al. [32]     0.73
    Ours                       0.76
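    Tables 1 to 4 report ROC-AUC for the OGBG-MolHIV classification benchmark and the coefficient of determination R² for the yield-regression datasets. Both are standard metrics; a generic scikit-learn sketch with toy values (not the paper's data or evaluation code) shows how such numbers are computed:

        import numpy as np
        from sklearn.metrics import r2_score, roc_auc_score

        # Toy values for illustration only; they do not reproduce the tables.
        hiv_labels = np.array([0, 1, 1, 0, 1])             # binary activity labels
        hiv_scores = np.array([0.2, 0.8, 0.25, 0.3, 0.9])  # predicted probabilities
        print("ROC-AUC:", roc_auc_score(hiv_labels, hiv_scores))  # 5/6 ≈ 0.833

        yields_true = np.array([0.12, 0.55, 0.78, 0.30])   # measured reaction yields
        yields_pred = np.array([0.10, 0.50, 0.70, 0.35])   # model predictions
        print("R^2:", r2_score(yields_true, yields_pred))  # ≈ 0.95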
    <th id="5nh9l"></th><strike id="5nh9l"></strike><th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th><strike id="5nh9l"></strike>
    <progress id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"><noframes id="5nh9l">
    <th id="5nh9l"></th> <strike id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span>
    <progress id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span><strike id="5nh9l"><noframes id="5nh9l"><strike id="5nh9l"></strike>
    <span id="5nh9l"><noframes id="5nh9l">
    <span id="5nh9l"><noframes id="5nh9l">
    <span id="5nh9l"></span><span id="5nh9l"><video id="5nh9l"></video></span>
    <th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th>
    <progress id="5nh9l"><noframes id="5nh9l">
    259luxu-164
  • [1] Ying R, He R N, Chen K F, et al. Graph convolutional neural networks for web-scale recommender systems // Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. London, 2018: 974
    [2] Dai H J, Li C T, Coley C W, et al. Retrosynthesis prediction with conditional graph logic network // Advances in Neural Information Processing Systems. Vancouver, 2019: 8870
    [3] Han K, Wang Y, Guo J, et al. Vision GNN: An image is worth graph of nodes // Advances in Neural Information Processing Systems. New Orleans, 2022
    [4] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need // Advances in Neural Information Processing Systems. Long Beach, 2017: 5998
    [5] Cho K, van Merrienboer B, Gulcehre C, et al. Learning phrase representations using RNN encoder–decoder for statistical machine translation // Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, 2014: 1724
    [6] Hamilton W L, Ying R, Leskovec J. Inductive representation learning on large graphs // Advances in Neural Information Processing Systems. Long Beach, 2017: 1024
    [7] Veličković P, Cucurull G, Casanova A, et al. Graph attention networks // International Conference on Learning Representations. Vancouver, 2018
    [8] Wijesinghe A, Wang Q. A New Perspective on “How graph neural networks go beyond Weisfeiler-Lehman?” // International Conference on Learning Representations. Online, 2022
    [9] Liu J W, Liu J W, Luo X L. Research progress in attention mechanism in deep learning. Chin J Eng, 2021, 43(11): 1499
    [10] Ying C X, Cai T L, Luo S J, et al. Do transformers really perform badly for graph representation? // Advances in Neural Information Processing Systems. Online, 2021: 28877
    [11] Alon U, Yahav E. On the bottleneck of graph neural networks and its practical implications[J/OL]. arXiv preprint (2020-6-9) [2022-7-24]. https://arxiv.org/abs/2006.05205
    [12] Jin W G, Barzilay R, Jaakkola T. Junction tree variational autoencoder for molecular graph generation // International Conference on Machine Learning. Stockholm, 2018: 2323
    [13] Chen Z D, Chen L, Villar S, et al. Can graph neural networks count substructures? // Proceedings of the 34th International Conference on Neural Information Processing Systems. Online, 2020: 10383
    [14] Bouritsas G, Frasca F, Zafeiriou S, et al. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Trans Pattern Anal Mach Intell, 2023, 45(1): 657 doi: 10.1109/TPAMI.2022.3154319
    [15] Yu H, Zhao S Y, Shi J Y. STNN-DDI: A substructure-aware tensor neural network to predict drug-drug interactions. Brief Bioinform, 2022, 23(4): bbac209 doi: 10.1093/bib/bbac209
    [16] Hu W, Fey M, Zitnik M, et al. Open graph benchmark: Datasets for machine learning on graphs // Advances in Neural Information Processing Systems. Online, 2020
    [17] Lowe D. Chemical reactions from US patents (1976-Sep2016) [J/OL]. Figshare (2017-6-14) [2022-7-24]. https://doi.org/10.6084/m9.figshare.5104873.v1
    [18] Ahneman D T, Estrada J G, Lin S, et al. Predicting reaction performance in C-N cross-coupling using machine learning. Science, 2018, 360(6385): 186 doi: 10.1126/science.aar5169
    [19] Wu F, Fan A, Baevski A, et al. Pay less attention with lightweight and dynamic convolutions // International Conference on Learning Representations. New Orleans, 2019
    [20] Wu Z H, Liu Z J, Lin J, et al. Lite transformer with long-short range attention[J/OL]. arXiv preprint (2020-4-24) [2022-7-24]. https://arxiv.org/abs/2004.11886
    [21] Gulati A, Qin J, Chiu C C, et al. Conformer: Convolution-augmented transformer for speech recognition // Interspeech Conference. Shanghai, 2020: 5036
    [22] Wang Y Q, Xu Z L, Wang X L, et al. End-to-end video instance segmentation with transformers // IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, 2021: 8737
    [23] Wu H P, Xiao B, Codella N, et al. CvT: Introducing convolutions to vision transformers // IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, 2021: 22
    [24] Si C Y, Yu W H, Zhou P, et al. Inception transformer[J/OL]. arXiv preprint (2022-5-25) [2022-7-24]. https://arxiv.org/abs/2205.12956
    [25] Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision // IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, 2016: 2818
    [26] Szegedy C, Ioffe S, Vanhoucke V, et al. Inception-v4, inception-resnet and the impact of residual connections on learning // Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. San Francisco, 2017: 4278
    [27] Zhou B L, Andonian A, Oliva A, et al. Temporal relational reasoning in videos // Proceedings of the European Conference on Computer Vision. Munich, 2018: 803
    [28] Kim Y. Convolutional neural networks for sentence classification[J/OL]. arXiv preprint (2014-8-25) [2022-7-24]. https://arxiv.org/abs/1408.5882
    [29] Kingma D P, Welling M. Auto-encoding variational bayes // International Conference on Learning Representations. Banff, 2014: 1
    [30] Rezende D J, Mohamed S, Wierstra D. Stochastic backpropagation and approximate inference in deep generative models // International Conference on Machine Learning. Beijing, 2014: 1278
    [31] Maddison C J, Mnih A, Teh Y W. The concrete distribution: A continuous relaxation of discrete random variables[J/OL]. arXiv preprint (2016-11-2) [2022-7-24]. https://arxiv.org/abs/1611.00712
    [32] Schwaller P, Vaucher A C, Laino T, et al. Prediction of chemical reaction yields using deep learning. Mach Learn: Sci Technol, 2021, 2(1): 015016 doi: 10.1088/2632-2153/abc81d
    [33] Landrum G. RDKit documentation[J/OL]. RDKit (2012-12-1) [2022-7-24]. http://www.rdkit.org/RDKit_Docs.2012_12_1.pdf
    [34] Schwaller P, Laino T, Gaudin T, et al. Molecular transformer: A model for uncertainty-calibrated chemical reaction prediction. ACS Cent Sci, 2019, 5(9): 1572 doi: 10.1021/acscentsci.9b00576
    [35] Schwaller P, Probst D, Vaucher A C, et al. Mapping the space of chemical reactions using attention-based neural networks. Nat Mach Intell, 2021, 3(2): 144 doi: 10.1038/s42256-020-00284-w
    [36] Tailor S A, Opolka F L, Liò P, et al. Do we need anisotropic graph neural networks?[J/OL]. arXiv preprint (2021-4-3) [2022-7-24]. https://arxiv.org/abs/2104.01481
    [37] Zhang M, Li P. Nested graph neural networks // Advances in Neural Information Processing Systems. Online, 2021: 15734
    [38] Chuang K V, Keiser M J. Comment on “Predicting reaction performance in C–N cross-coupling using machine learning”. Science, 2018, 362(6416): 186
    [39] Sandfort F, Strieth-Kalthoff F, Kühnemund M, et al. A structure-based platform for predicting chemical reactivity. Chem, 2020, 6(6): 1379 doi: 10.1016/j.chempr.2020.02.017
Publication history
  • Received: 2022-07-24
  • Available online: 2023-05-27
  • Published in issue: 2023-10-25
