Authors: DING Yue; WU Zhize
Affiliation: [1] School of Artificial Intelligence and Big Data, Hefei University, Hefei 230601, China
Source: Journal of Hefei University, 2024, No. 5, pp. 94-101 (8 pages)
Funding: National Natural Science Foundation of China project "Research on unsupervised daily activity recognition for the elderly with visual privacy protection" (62406095); Anhui Provincial Natural Science Foundation general project "Research on non-intrusive daily activity recognition methods for the elderly based on privileged-information learning" (2308085MF213); Anhui Provincial Key Research and Development Program "Design and implementation of a network attack-defense drill platform based on evolutionary games" (2022K07020011).
Abstract: Existing methods based on standard graph convolutional networks rely mainly on local graph convolution operations, which limits their flexibility in capturing complex long-range associations between joints. To address this, a Self-Attention Enhanced Graph Convolutional Network (SGNet) is proposed. Leveraging the characteristics of skeletal data, independent global modeling is performed for each channel of the joint features, termed Channel-Specific Global Spatial Modeling (C-GSM). This runs in parallel with Local Spatial Modeling (LSM) to extract both local and global spatial feature representations. Extensive experiments were conducted on two large and challenging benchmark datasets, NTU RGB+D and NTU RGB+D 120. SGNet is highly competitive with state-of-the-art methods, achieving top accuracies of 92.9% on NTU RGB+D X-Sub and 90.7% on NTU RGB+D 120 X-Set.
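The abstract describes two parallel branches: a local graph convolution over the skeleton adjacency (LSM) and a per-channel global attention over all joints (C-GSM). The paper's exact formulation is not given in this record, so the following is only a minimal NumPy sketch of that two-branch idea under assumed shapes (`J` joints, `C` channels); the attention score based on pairwise channel-value differences is a hypothetical stand-in for the paper's self-attention.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_spatial_modeling(X, A, W):
    # Standard graph convolution: aggregate neighbors via the
    # (row-normalized) skeleton adjacency A, then project with W.
    # X: (J, C) joint features, A: (J, J), W: (C, C)
    return A @ X @ W

def channel_specific_global_modeling(X):
    # For each channel independently, build a (J, J) attention map
    # over ALL joints (global, not limited by skeleton topology),
    # then aggregate that channel's values with it.
    J, C = X.shape
    out = np.empty_like(X)
    for c in range(C):
        v = X[:, c:c + 1]                           # (J, 1)
        attn = softmax(-np.abs(v - v.T), axis=-1)   # (J, J), rows sum to 1
        out[:, c] = (attn @ v).ravel()
    return out

# Toy skeleton: 3 joints in a chain, self-loops included.
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
A = A / A.sum(axis=1, keepdims=True)  # row-normalize
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))       # 3 joints, 4 channels
W = np.eye(4)

# The two branches run in parallel and are fused (here: by summation).
fused = local_spatial_modeling(X, A, W) + channel_specific_global_modeling(X)
print(fused.shape)  # (3, 4)
```

The key contrast the sketch illustrates: the LSM branch can only mix joints connected in `A`, while the C-GSM branch lets every joint attend to every other joint, with a separate attention map per channel.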
Classification code: TP391.4 [Automation and Computer Technology; Computer Application Technology]