Human Action Recognition by Learning Pose Dictionary (Cited by: 9)


Authors: Cai Jiaxin [1,2], Feng Guocan [1,2], Tang Xin [1,2], Luo Zhihong [3]

Affiliations: [1] School of Mathematics and Computational Science, Sun Yat-sen University, Guangzhou 510275, Guangdong, China; [2] Guangdong Province Key Laboratory of Computational Science, Guangzhou 510275, Guangdong, China; [3] School of Information Science and Technology, Sun Yat-sen University, Guangzhou 510275, Guangdong, China

Source: Acta Optica Sinica, 2014, No. 12, pp. 173-184 (12 pages)

Funding: National Natural Science Foundation of China (61272338)

Abstract: A framework for human action recognition based on learning a pose dictionary from human contour representations is proposed. A new pose feature built on Procrustes shape analysis and locality preserving projection extracts shape information from human motion video that is invariant to translation, scaling, and rotation, and captures discriminative subspace information while preserving the local manifold structure of human poses. Building on this feature, an action recognition framework based on pose dictionary learning is proposed: a class-specific dictionary is trained on the training frames of each action class, and the whole pose dictionary is obtained by concatenating all class-specific dictionaries. A test video is classified by the minimum reconstruction error over the learned dictionary. Experimental results on the Weizmann and MuHAVi-MAS14 datasets show that the proposed method outperforms most classical methods; in particular, the classification rate on MuHAVi-MAS14 is substantially higher than previously reported results.
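The pipeline described in the abstract (Procrustes-normalized contour features, one dictionary per action class, and classification by minimum reconstruction error) can be illustrated with the following minimal sketch. It is not the authors' implementation: all function names and data shapes are hypothetical, the locality preserving projection step is replaced by PCA as a stand-in (LPP is not available in scikit-learn), and the reconstruction error is computed against each class dictionary separately rather than over the concatenated dictionary used in the paper. It assumes `features_by_class` maps an action label to an (n_frames, feature_dim) array of normalized contour features.

```python
# Minimal illustrative sketch, assuming numpy, scipy and scikit-learn.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.decomposition import PCA, DictionaryLearning


def normalize_contour(points, reference):
    """Normalize a (k, 2) contour: remove translation, scale and rotation."""
    p = points - points.mean(axis=0)            # translation invariance
    p = p / np.linalg.norm(p)                    # scale invariance
    R, _ = orthogonal_procrustes(p, reference)   # rotate onto a reference shape
    return (p @ R).ravel()                       # flatten to a feature vector


def learn_class_dictionaries(features_by_class, n_atoms=40, n_dims=32):
    """Fit PCA (stand-in for LPP) plus one sparse dictionary per action class."""
    all_feats = np.vstack(list(features_by_class.values()))
    pca = PCA(n_components=n_dims).fit(all_feats)
    dictionaries = {
        label: DictionaryLearning(n_components=n_atoms, alpha=1.0,
                                  transform_algorithm="lasso_lars")
               .fit(pca.transform(feats))
        for label, feats in features_by_class.items()
    }
    return pca, dictionaries


def classify_video(frame_features, pca, dictionaries):
    """Assign the class whose dictionary reconstructs the test frames best."""
    X = pca.transform(np.asarray(frame_features))
    errors = {}
    for label, dico in dictionaries.items():
        codes = dico.transform(X)                 # sparse codes of each frame
        recon = codes @ dico.components_          # reconstruction from atoms
        errors[label] = np.linalg.norm(X - recon) # total reconstruction error
    return min(errors, key=errors.get)            # minimum-error class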

Keywords: image processing; action recognition; Procrustes shape analysis; locality preserving projection; sparse representation; dictionary learning

Classification: TP391 [Automation and Computer Technology / Computer Application Technology]
