Authors: Zhu Bin; Yang Cheng[1]; Yu Chunyang; An Fang[1,2]
Affiliations: [1] Department of Industrial Design, Zhejiang University City College, Hangzhou 310015; [2] Modern Industrial Design Institute, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027
Source: Journal of Computer-Aided Design & Computer Graphics, 2018, No. 9, pp. 1778-1784 (7 pages)
Funding: Zhejiang Provincial Natural Science Foundation (LY18E050014); National Natural Science Foundation of China (61672451)
Abstract: To satisfy users' emotional demands for products, a product image recognition method based on deep learning is proposed. The method obtains a product image dataset via the semantic differential method, then trains the convolutional neural network VGGNet on this dataset to build a product image deep model. The method is validated on chairs, a typical product category; the trained product image deep model reaches a recognition accuracy of up to 95.3%. To further demonstrate the method's advantage, it is compared both with a traditional approach represented by the support vector machine (SVM) and with the shallower convolutional neural network CaffeNet. The results show that the proposed method improves recognition accuracy by about 5% over SVM and by 4%-10% over CaffeNet. In addition, to explain the recognition process of deep learning, the convolutional features are visualized, showing the abstraction of feature maps from low-level to high-level.
Keywords: product image; deep learning; self-learned features; VGGNet; convolution operation
Classification: TP391.41 (Automation and Computer Technology: Computer Application Technology)
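The convolution operation named in the keywords is the core of VGGNet's layer-by-layer feature extraction described in the abstract. Below is a minimal NumPy sketch of a single valid 2-D convolution (cross-correlation, as CNN layers actually compute it), not the authors' implementation; the vertical-edge kernel and toy image are purely illustrative.

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    oh = (h - kh) // stride + 1
    ow = (w - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise product of the kernel with the current patch, summed
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# Toy input: left half dark (0), right half bright (1)
img = np.zeros((5, 5))
img[:, 2:] = 1.0

# A 3x3 vertical-edge detector; the response peaks where intensity changes
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)

fmap = conv2d(img, edge_kernel)   # shape (3, 3); strong response at the edge
```

Stacking many such filtered maps, interleaved with nonlinearities and pooling, is what lets VGGNet's feature maps progress from low-level edges to high-level shape abstractions, which is exactly what the visualization in the paper illustrates.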