MSF-Net: A Multilevel Spatiotemporal Feature Fusion Network Combines Attention for Action Recognition  


Authors: Mengmeng Yan, Chuang Zhang, Jinqi Chu, Haichao Zhang, Tao Ge, Suting Chen

Affiliations: [1] School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China; [2] Jiangsu Key Laboratory of Meteorological Observation and Information Processing, Nanjing 210044, China

Source: Computer Systems Science & Engineering, 2023, Issue 11, pp. 1433-1449 (17 pages)

Funding: Supported by the General Program of the National Natural Science Foundation of China (62272234), the Enterprise Cooperation Project (2022h160), and the Priority Academic Program Development of Jiangsu Higher Education Institutions Project.

Abstract: An action recognition network that combines multilevel spatiotemporal feature fusion with an attention mechanism is proposed to address three issues in 3D convolutional neural networks: single-scale spatiotemporal feature extraction, information redundancy, and insufficient extraction of frequency-domain information in the channels. First, building on the 3D CNN, this paper designs a new multilevel spatiotemporal feature fusion (MSF) structure that is embedded in the network model; through multilevel spatiotemporal feature separation, splicing, and fusion, it combines spatial receptive fields and short-, medium-, and long-range temporal information at different scales while reducing the number of network parameters. Second, a multi-frequency channel and spatiotemporal attention module (FSAM) is introduced to assign corresponding weights to the different frequency features and spatiotemporal features in the channels, reducing the information redundancy of the feature maps. Finally, we embed the proposed modules into the R3D model, which replaces the 2D convolutional filters of 2D ResNet with 3D convolutional filters, and conduct extensive experimental validation on the small-to-medium-sized dataset UCF101 and the large-sized dataset Kinetics-400. The results show that our model improves recognition accuracy on both datasets. On UCF101 in particular, our model outperforms R3D with a maximum recognition accuracy improvement of 7.2% while using 34.2% fewer parameters. The MSF and FSAM modules are also migrated to another classical 3D action recognition model, C3D, for application testing; the results on UCF101 show an accuracy improvement of 8.9%, demonstrating the strong generalization ability and universality of the proposed method.
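The abstract describes two architectural components: an MSF block that separates channel groups, convolves them with different temporal extents, then splices and fuses them, and an FSAM module that weights channel and spatiotemporal features to suppress redundancy. The PyTorch sketch below is a minimal, hypothetical rendering of those ideas based only on the abstract; the kernel sizes, the three-way channel split, the residual connection, and the pooled-statistics stand-in for the multi-frequency channel weighting are all assumptions, not the authors' implementation.

# Minimal illustrative sketch of MSF-style fusion and FSAM-style attention.
# All layer shapes and the frequency-weighting approximation are assumptions.
import torch
import torch.nn as nn


class MSFBlock(nn.Module):
    """Multilevel spatiotemporal feature fusion (illustrative).

    Splits channels into three groups, applies 3D convolutions with
    short/medium/long temporal kernels, then splices and fuses them,
    mixing spatial receptive fields and time scales with fewer
    parameters than a single large 3D kernel.
    """

    def __init__(self, channels: int):
        super().__init__()
        assert channels % 3 == 0, "channels must split into three groups"
        g = channels // 3
        # Short-, medium-, and long-range temporal kernels (assumed sizes).
        self.short = nn.Conv3d(g, g, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.medium = nn.Conv3d(g, g, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.long = nn.Conv3d(g, g, kernel_size=(5, 3, 3), padding=(2, 1, 1))
        self.fuse = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, H, W). Separate, process at three scales, splice, fuse.
        a, b, c = torch.chunk(x, 3, dim=1)
        y = torch.cat([self.short(a), self.medium(b), self.long(c)], dim=1)
        return self.fuse(y) + x  # residual connection (assumed)


class FSAM(nn.Module):
    """Multi-frequency channel and spatiotemporal attention (illustrative).

    Channel weights are approximated here with global average pooling plus
    a small MLP rather than explicit frequency components; a second branch
    produces a spatiotemporal mask from channel-wise mean and max maps.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.spatiotemporal = nn.Sequential(
            nn.Conv3d(2, 1, kernel_size=(3, 7, 7), padding=(1, 3, 3)),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, t, h, w = x.shape
        # Channel attention from globally pooled statistics.
        w_c = self.channel_mlp(x.mean(dim=(2, 3, 4))).view(n, c, 1, 1, 1)
        x = x * w_c
        # Spatiotemporal attention from channel-wise mean and max maps.
        m = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatiotemporal(m)


if __name__ == "__main__":
    clip = torch.randn(2, 48, 16, 56, 56)    # (N, C, T, H, W) toy input
    feats = FSAM(48)(MSFBlock(48)(clip))     # MSF block followed by FSAM
    print(feats.shape)                       # torch.Size([2, 48, 16, 56, 56])

Inserting such a block after a residual stage of R3D or C3D would mirror the embedding strategy the abstract describes, though the exact insertion points and the true multi-frequency weighting are not specified there.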

Keywords: 3D convolutional neural network; action recognition; MSF; FSAM

Classification: TP391.41 [Automation and Computer Technology - Computer Application Technology]

 
