Affiliation: [1] College of Automation, Nanjing University of Posts and Telecommunications, Nanjing 210003, Jiangsu, China
Source: Computer Technology and Development (《计算机技术与发展》), 2015, No. 11, pp. 44-48 (5 pages)
Funding: National Natural Science Foundation of China (61272273); Jiangsu Province 333 Project (BRA2011175); Research Project of Nanjing University of Posts and Telecommunications (XJKY14016)
Abstract: In pattern recognition based on sparse representation classification, dictionary learning can obtain a more concise data representation for sparse representation. However, dictionary size is an important trade-off between recognition accuracy and speed, and an optimized dictionary design can satisfy both requirements simultaneously. This paper proposes a novel technique called the K-SVD dictionary learning algorithm based on competitive agglomeration (CA-KSVD), which optimizes the dictionary size without compromising recognition accuracy. CA-KSVD improves the dictionary learning ability of K-SVD by introducing the cluster-number optimization principle of the competitive agglomeration algorithm into K-SVD. The optimization procedure starts with a large number of dictionary atoms and gradually removes under-utilized or similar atoms, finally producing a high-performance dictionary that contains no redundant atoms. Experimental results on the Extended Yale B and AR face databases demonstrate the effectiveness of the proposed method.
Keywords: sparse representation; dictionary learning; clustering; competitive agglomeration; K-SVD algorithm
CLC number: TP301.6 [Automation and Computer Technology / Computer System Architecture]
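The abstract above outlines the general procedure: start dictionary learning with an over-complete set of atoms and progressively remove under-utilized or near-duplicate ones. The sketch below is only a simplified, hypothetical illustration of that pruning idea in plain NumPy (OMP sparse coding, rank-1 SVD atom updates, then a usage- and correlation-based prune). It is not the paper's CA-KSVD algorithm, whose competitive-agglomeration objective for optimizing the number of atoms is given in the full text; the function names and thresholds (usage_thresh, corr_thresh) are assumptions chosen for illustration.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: code signal y over dictionary D with at most k atoms."""
    residual, idx, coef = y.astype(float).copy(), [], np.array([])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in idx:
            break
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def learn_pruned_dictionary(Y, n_atoms=64, sparsity=5, n_iter=10,
                            usage_thresh=1e-3, corr_thresh=0.99):
    """Toy dictionary learner (illustrative only, not CA-KSVD): alternate OMP sparse
    coding with rank-1 K-SVD-style atom updates, then drop atoms that are rarely
    used or nearly duplicated. Assumes Y has at least n_atoms training columns."""
    rng = np.random.default_rng(0)
    # initialize the dictionary with randomly chosen, normalized training signals
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_iter):
        # sparse coding stage: one OMP problem per training signal
        X = np.column_stack([omp(D, y, sparsity) for y in Y.T])
        # dictionary update stage: rank-1 SVD of the residual restricted to each atom's users
        for j in range(D.shape[1]):
            users = np.flatnonzero(X[j])
            if users.size == 0:
                continue
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j], X[j, users] = U[:, 0], s[0] * Vt[0]
        # pruning stage: discard under-utilized atoms ...
        usage = np.abs(X).sum(axis=1)
        keep = usage > usage_thresh * usage.max()
        # ... and the later atom of every highly correlated (near-duplicate) pair
        G = np.abs(D.T @ D)
        for j in range(1, D.shape[1]):
            if keep[j] and np.any(G[j, :j][keep[:j]] > corr_thresh):
                keep[j] = False
        D, X = D[:, keep], X[keep]
    return D

# usage example with synthetic data (hypothetical shapes, not the face-database experiments)
Y = np.random.default_rng(1).standard_normal((32, 400))   # 400 random 32-dimensional signals
D = learn_pruned_dictionary(Y, n_atoms=64, sparsity=5)
print(D.shape)   # (32, n_kept_atoms); the pruning may retain fewer than 64 atoms
```

The two pruning criteria mirror the idea hinted at in the abstract: an atom whose coefficients are almost never activated contributes little to reconstruction, and two atoms with nearly identical directions are redundant, so either can be removed without hurting the sparse representation.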