Affiliation: [1] Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Anhui University, Hefei 230601, Anhui, China
Source: Journal of Sichuan University (Engineering Science Edition), 2016, No. 6, pp. 165-171 (7 pages)
Funds: National Natural Science Foundation of China (61172127, 61401001); Specialized Research Fund for the Doctoral Program of Higher Education (20113401110006); Natural Science Foundation of Anhui Province (1508085MF120)
Abstract: Effective video representation is a key difficulty in human action recognition. An improved string-of-feature-graphs representation of a video is proposed, which combines a submodular optimization method with graph matching within a dynamic programming framework to recognize actions. First, key points are extracted from the video sequence with a widely used spatio-temporal interest point detector. Next, a submodular optimization method that respects temporal order partitions the video into a sequence of short time intervals. Within each interval, a graph is built with the detected key points as nodes, so that the action video is represented as an ordered string of feature graphs. Finally, pairs of videos are matched and aligned by combining reweighted random walks for graph matching (RRWM) with dynamic time warping (DTW) over the two strings of graphs. Experiments on two public datasets (KTH and UT-interaction) and comparisons with other methods demonstrate that the proposed method is effective and feasible.
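The final alignment step lends itself to a compact illustration. The sketch below is not the authors' implementation; it only shows how dynamic time warping could align two videos once each is reduced to an ordered string of feature graphs. The function graph_similarity is a hypothetical placeholder for the per-interval matching score that an RRWM-style graph matcher would supply.

import numpy as np

def dtw_align(graphs_a, graphs_b, graph_similarity):
    # Dynamic time warping over two ordered strings of feature graphs.
    # graph_similarity(g1, g2) is assumed to return a score in [0, 1]
    # (e.g. from an RRWM-style graph matcher); it is turned into a cost
    # so that DTW minimizes the accumulated dissimilarity.
    n, m = len(graphs_a), len(graphs_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = 1.0 - graph_similarity(graphs_a[i - 1], graphs_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # drop an interval of video A
                                 cost[i, j - 1],      # drop an interval of video B
                                 cost[i - 1, j - 1])  # match the two intervals
    return cost[n, m] / (n + m)  # length-normalized alignment cost

A lower returned cost indicates that the two videos are more similar under this representation; the abstract does not specify how the pairwise scores are turned into a final class decision.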
Classification Code: TP391 [Automation and Computer Technology - Computer Application Technology]