Volume 45, Issue 7, Jul. 2023
Citation: WANG Hong-ye, QIAN Quan, WU Xing. Incremental learning of material absorption coefficient regression based on parameter penalty and experience replay[J]. Chinese Journal of Engineering, 2023, 45(7): 1225-1231. doi: 10.13374/j.issn2095-9389.2022.05.03.006

Incremental learning of material absorption coefficient regression based on parameter penalty and experience replay

doi: 10.13374/j.issn2095-9389.2022.05.03.006
  • Corresponding author: E-mail: xingwu@shu.edu.cn
  • Received Date: 2022-05-03
  • Available Online: 2022-09-13
  • Publish Date: 2023-07-25
Abstract: Material data are prepared in batches and stages, and the data distribution varies across batches. However, the average accuracy of a neural network declines when it learns material data batch by batch, which poses great challenges to the application of artificial intelligence in the materials field. Therefore, an incremental learning framework based on parameter penalty and experience replay was applied to learn streaming data. The decline in average accuracy has two causes: abrupt changes in model parameters and an overly homogeneous sample feature space. By analyzing how model parameters vary, a parameter penalty mechanism was established to restrain the model parameters from fitting toward new data when the model learns it. The penalty strength is adjusted dynamically according to the speed of parameter change: the faster a parameter changes, the stronger its penalty, and vice versa. To enhance sample diversity, an experience replay method was proposed that jointly trains on new data and on old data sampled from a cache pool. At the end of each incremental task, the incremental data are sampled and used to update the cache pool. Specifically, random sampling is used for joint training, whereas reservoir sampling is used to update the cache pool. The proposed methods (experience replay and parameter penalty) were applied to material absorption coefficient regression and image classification tasks. The experimental results indicate that experience replay is more effective than parameter penalty, but the best results are obtained when both methods are combined: relative to the benchmark, the average accuracy on the two tasks increased by 45.93% and 2.62%, and the average forgetting rate decreased by 86.60% and 67.20%, respectively. A comparison with existing methods shows that our approach is more competitive. Additionally, the effects of each method's key parameter on average accuracy were analyzed: average accuracy increases with the experience replay proportion, and first increases and then decreases as the penalty factor grows. In general, our approach is not limited by data modality or learning task; it can perform incremental learning on tabular or image data and on regression or classification tasks. Moreover, owing to its flexible parameter settings, it can be adapted to different environments and tasks.
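
The two mechanisms summarized in the abstract lend themselves to a short illustration. Below is a minimal Python sketch, not the authors' released implementation, of (1) a parameter penalty whose strength scales with the speed of parameter change, anchoring weights to their values at the start of the current incremental task, and (2) a fixed-size cache pool maintained by reservoir sampling, from which old samples are drawn at random for joint training with new data. All names (ParameterPenalty, ReservoirBuffer, lambda_base, capacity) and the exact quadratic penalty form are illustrative assumptions, not the paper's definitions.

```python
# Minimal sketch of the two mechanisms described in the abstract (assumed
# forms, not the paper's code): speed-weighted parameter penalty + reservoir
# sampling based experience replay.
import random
import torch


class ParameterPenalty:
    """Quadratic penalty anchoring parameters to their values at the start of
    the current incremental task, with each parameter's penalty weighted by
    how fast it has recently been changing (illustrative assumption)."""

    def __init__(self, model, lambda_base=1.0):
        self.lambda_base = lambda_base
        self.anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
        self.prev = {n: p.detach().clone() for n, p in model.named_parameters()}
        self.speed = {n: torch.zeros_like(p) for n, p in model.named_parameters()}

    def update_speed(self, model):
        # Called after each optimizer step: record |delta w| per parameter so
        # that fast-moving parameters receive a stronger penalty next step.
        for n, p in model.named_parameters():
            self.speed[n] = (p.detach() - self.prev[n]).abs()
            self.prev[n] = p.detach().clone()

    def penalty(self, model):
        # Added to the task loss: faster parameter change => higher penalty.
        loss = torch.zeros(())
        for n, p in model.named_parameters():
            loss = loss + (self.speed[n] * (p - self.anchor[n]) ** 2).sum()
        return self.lambda_base * loss


class ReservoirBuffer:
    """Fixed-capacity cache pool updated by reservoir sampling, so every
    sample seen so far is retained with equal probability."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.pool = []
        self.n_seen = 0

    def add(self, sample):
        # Reservoir sampling: the i-th sample replaces a random slot with
        # probability capacity / i once the pool is full.
        self.n_seen += 1
        if len(self.pool) < self.capacity:
            self.pool.append(sample)
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.pool[j] = sample

    def draw(self, k):
        # Random sampling of old data for joint training with new data.
        return random.sample(self.pool, min(k, len(self.pool)))
```

In a training loop built on this sketch, each step's loss would be the task loss plus penalty(model), with update_speed(model) called after the optimizer step and a mini-batch from draw(k) mixed into each batch of new data; at the end of an incremental task, the task's samples are pushed through add(...) and the anchor is reset to the current weights.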
