Affiliation: [1] School of Mathematics and Computer Science, Wuhan Polytechnic University, Wuhan 430023, China
Source: Journal of Chinese Computer Systems (《小型微型计算机系统》), 2014, No. 9, pp. 2156-2161 (6 pages)
Funding: Supported by the Natural Science Foundation of Hubei Province (2009Chb008; 2010CDB06603) and the Key Scientific Research Project of the Hubei Provincial Department of Education (D20101703)
Abstract: To better mine video data and analyze video content, this paper proposes a multi-modal video scene segmentation algorithm based on semantic concepts, built on SimFusion, locality preserving projections (LPP), support vector machines (SVM), and a semantic overlapped shot-chain algorithm. The method takes full account of the temporal associated co-occurrence of the multiple modalities in video. Shot-to-shot similarity is computed with a similarity fusion algorithm that propagates correlations across the multi-modal subspaces; the low-dimensional semantic-space coordinates obtained by dimensionality reduction are fed into SVMs to train classifiers for a number of semantic concepts and to predict a semantic concept vector for each key frame; finally, shots are clustered into video scenes with the semantic overlapped shot-chain method. Experimental results show that the proposed method effectively detects semantic concepts and segments video scenes, reaching a MAP of 50% and an M value of 83.4%.
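The abstract describes a four-stage pipeline: similarity fusion across modalities, LPP dimensionality reduction, per-concept SVM classification, and shot-chain clustering into scenes. The following Python sketch is only an illustration of that data flow under simplifying assumptions (generic shot feature vectors, binary concept labels, a fixed weighted fusion instead of the iterative SimFusion propagation, and a greedy adjacent-shot linking rule instead of the full overlapped shot-chain method); it is not the authors' implementation.

"""
Illustrative sketch (not the paper's code) of the pipeline in the abstract:
fuse per-modality shot similarities, reduce dimensionality with a simple
locality-preserving-projection (LPP) step, train one SVM per semantic
concept, and merge consecutive shots sharing predicted concepts into scenes.
Weights, kernel, threshold, and the scene-linking rule are assumptions.
"""
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC


def fuse_similarities(sim_matrices, weights=None):
    """Weighted fusion of per-modality shot-similarity matrices
    (a simplification of the iterative SimFusion propagation)."""
    weights = weights or [1.0 / len(sim_matrices)] * len(sim_matrices)
    fused = sum(w * s for w, s in zip(weights, sim_matrices))
    return (fused + fused.T) / 2.0            # keep the matrix symmetric


def lpp_embedding(features, similarity, dim=10):
    """Locality preserving projection: solve X^T L X a = lambda X^T D X a
    and keep the eigenvectors with the smallest eigenvalues."""
    D = np.diag(similarity.sum(axis=1))
    L = D - similarity                         # graph Laplacian
    A = features.T @ L @ features
    B = features.T @ D @ features + 1e-6 * np.eye(features.shape[1])
    vals, vecs = eigh(A, B)                    # eigenvalues in ascending order
    projection = vecs[:, :dim]
    return features @ projection               # low-dimensional coordinates


def train_concept_classifiers(embedded, concept_labels):
    """One binary SVM per semantic concept (columns of concept_labels)."""
    classifiers = []
    for c in range(concept_labels.shape[1]):
        clf = SVC(kernel="rbf", probability=True)
        clf.fit(embedded, concept_labels[:, c])
        classifiers.append(clf)
    return classifiers


def predict_concept_vectors(classifiers, embedded):
    """Semantic concept vector (concept probabilities) for each key frame."""
    return np.column_stack(
        [clf.predict_proba(embedded)[:, 1] for clf in classifiers])


def link_shots_into_scenes(concept_vectors, threshold=0.5):
    """Greedy stand-in for the semantic overlapped shot-chain step:
    start a new scene when a shot shares no concept with the previous one."""
    active = concept_vectors >= threshold
    scenes, current = [], [0]
    for i in range(1, len(active)):
        if np.any(active[i] & active[i - 1]):
            current.append(i)
        else:
            scenes.append(current)
            current = [i]
    scenes.append(current)
    return scenes

Note that in the paper the similarity fusion propagates correlations across modality subspaces iteratively, and the overlapped shot-chain method can link non-adjacent shots within a window; the sketch above only captures the overall stage-by-stage data flow.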
Keywords: multi-modality; temporal associated co-occurrence; similarity fusion; support vector machine; semantic concept; scene segmentation
Classification code: TP391 [Automation and Computer Technology - Computer Application Technology]