<th id="5nh9l"></th><strike id="5nh9l"></strike><th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th><strike id="5nh9l"></strike>
<progress id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"><noframes id="5nh9l">
<th id="5nh9l"></th> <strike id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span>
<progress id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span><strike id="5nh9l"><noframes id="5nh9l"><strike id="5nh9l"></strike>
<span id="5nh9l"><noframes id="5nh9l">
<span id="5nh9l"><noframes id="5nh9l">
<span id="5nh9l"></span><span id="5nh9l"><video id="5nh9l"></video></span>
<th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th>
<progress id="5nh9l"><noframes id="5nh9l">
Citation: Action Recognition Model Based on Spatio-temporal Sampling Graph Convolution Network and Self-calibration Mechanism[J]. Chinese Journal of Engineering. doi: 10.13374/j.issn2095-9389.2022.12.25.002

Action Recognition Model Based on Spatio-temporal Sampling Graph Convolution Network and Self-calibration Mechanism

doi: 10.13374/j.issn2095-9389.2022.12.25.002
  • Available Online: 2023-04-04
  • Abstract: Existing action recognition algorithms ignore the contextual dependencies within spatio-temporal information and the dependencies between spatio-temporal information and channels. To address this, this paper proposes an action recognition model that combines a spatio-temporal sampling graph convolution network with a self-calibration mechanism. First, the principles of ST-GCN, 3D-GCN, the Transformer, and the self-attention mechanism are reviewed. Second, a spatio-temporal sampling graph convolution network is proposed: it treats several consecutive frames as one spatio-temporal sample and builds a spatio-temporal adjacency matrix that participates in the graph convolution, thereby establishing local and global spatio-temporal context dependencies. Then, to establish the dependency between space-time and channels and to enlarge the multi-level receptive field for capturing more discriminative temporal features, a temporal self-calibrating convolution network is proposed, which convolves and fuses features in two spaces of different scales: the original-scale space-time and a smaller latent space-time obtained by downsampling (illustrative sketches of both components follow below). Furthermore, the spatio-temporal sampling graph convolution network and the temporal self-calibration network are combined into a single action recognition model, which is trained end to end as a multi-stream network. Finally, skeleton-based action recognition experiments on the NTU-RGB+D and NTU-RGB+D 120 datasets verify the model's ability to extract discriminative spatio-temporal features and its strong recognition accuracy.
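The two components named in the abstract can be made concrete with short Python sketches. These are illustrative only: the function names, kernel sizes, pooling rate, and the exact edge layout are assumptions for exposition, not the paper's published implementation.

First, a minimal construction of a spatio-temporal adjacency matrix, assuming the straightforward layout in which tau consecutive frames are stacked into one graph: the skeleton's spatial edges are kept inside each frame, and temporal edges link the same joint across neighbouring frames, so a single graph convolution can mix local spatio-temporal context.

    import numpy as np

    def spatio_temporal_adjacency(a_spatial: np.ndarray, tau: int) -> np.ndarray:
        """Stack `tau` frames of a V-joint skeleton into one (tau*V, tau*V)
        adjacency matrix: spatial edges within each frame, plus temporal
        edges between the same joint in adjacent frames (assumed layout)."""
        v = a_spatial.shape[0]
        a_st = np.zeros((tau * v, tau * v), dtype=a_spatial.dtype)
        joints = np.arange(v)
        for t in range(tau):
            block = slice(t * v, (t + 1) * v)
            a_st[block, block] = a_spatial                    # intra-frame (spatial) edges
            if t + 1 < tau:                                   # inter-frame (temporal) edges
                a_st[t * v + joints, (t + 1) * v + joints] = 1
                a_st[(t + 1) * v + joints, t * v + joints] = 1
        return a_st

Second, a temporal self-calibrated convolution in the generic self-calibrated-convolution (SCNet) style, which matches the abstract's description of fusing an original-scale branch with a downsampled latent-scale branch; features are shaped (N, C, T, V) as is usual for skeleton sequences, and C is assumed even.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemporalSelfCalibratedConv(nn.Module):
        """Half the channels pass through a plain temporal convolution; the
        other half are gated by calibration weights computed in a temporally
        downsampled (latent) space, which enlarges the receptive field."""
        def __init__(self, channels: int, kernel_size: int = 9, pool_rate: int = 4):
            super().__init__()
            pad, half = (kernel_size - 1) // 2, channels // 2

            def tconv():  # temporal-only convolution over the T axis
                return nn.Conv2d(half, half, (kernel_size, 1), padding=(pad, 0))

            self.conv_plain, self.conv_latent = tconv(), tconv()
            self.conv_main, self.conv_out = tconv(), tconv()
            self.pool_rate = pool_rate

        def forward(self, x):                              # x: (N, C, T, V)
            x1, x2 = torch.chunk(x, 2, dim=1)
            t = F.avg_pool2d(x1, (self.pool_rate, 1))      # downsample time only
            t = F.interpolate(self.conv_latent(t), size=x1.shape[2:])
            gate = torch.sigmoid(x1 + t)                   # calibration weights from latent scale
            y1 = self.conv_out(self.conv_main(x1) * gate)  # calibrated original-scale branch
            return torch.cat([y1, self.conv_plain(x2)], dim=1)

For example, TemporalSelfCalibratedConv(64)(torch.randn(2, 64, 64, 25)) returns a (2, 64, 64, 25) tensor for 25-joint NTU-RGB+D skeletons: the downsampled branch supplies large-scale temporal context as a gate, modulating the original-scale features without losing temporal resolution.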



Corresponding author: Chen Bin, bchen63@163.com
    • 1. School of Materials Science and Engineering, Shenyang University of Chemical Technology, Shenyang 110142, China

    <th id="5nh9l"></th><strike id="5nh9l"></strike><th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th><strike id="5nh9l"></strike>
    <progress id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"><noframes id="5nh9l">
    <th id="5nh9l"></th> <strike id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span>
    <progress id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"><noframes id="5nh9l"><span id="5nh9l"></span><strike id="5nh9l"><noframes id="5nh9l"><strike id="5nh9l"></strike>
    <span id="5nh9l"><noframes id="5nh9l">
    <span id="5nh9l"><noframes id="5nh9l">
    <span id="5nh9l"></span><span id="5nh9l"><video id="5nh9l"></video></span>
    <th id="5nh9l"><noframes id="5nh9l"><th id="5nh9l"></th>
    <progress id="5nh9l"><noframes id="5nh9l">
    259luxu-164