Authors: Zhu Wenjia, Fu Yuanzi, Jin Qiang[2], Yu Ye[2]
Affiliations: [1] Anhui Baichenghuitong Technology Co., Ltd., Hefei 230088, Anhui, China; [2] School of Computer and Information, Hefei University of Technology, Hefei 230601, Anhui, China
Source: Journal of Hefei University of Technology (Natural Science), 2020, No. 2, pp. 205-210, 279 (7 pages)
Funding: Young Scientists Fund of the National Natural Science Foundation of China (61906061); Key Research and Development Program of Anhui Province (201904d07020010)
Abstract: To address the influence of different viewing angles on vehicle model classification, this paper proposes a viewing angle relative convolutional neural network (VAR-NET). The model contains two sub-networks: a viewing angle prediction sub-network, which extracts the viewing angle information of the vehicle, and a classification sub-network, which extracts vehicle features and classifies the vehicle model. Experimental results on the public datasets CompCars and Stanford Cars show that VAR-NET achieves good recognition results on multi-view vehicle images, with a recognition rate higher than that of several other classical network models.
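The abstract only outlines the two-sub-network structure, so the following is a minimal PyTorch sketch, not taken from the paper, of how such a view-aware classifier could be wired: a shared backbone feeds both a viewing angle prediction head and a classification head, and the softmaxed view prediction is concatenated with the image features before classification. The backbone choice (ResNet-18), the fusion by concatenation, and the view/model counts (5 viewpoints, 431 models) are all illustrative assumptions.

```python
# Hypothetical sketch of a two-sub-network design in the spirit of VAR-NET:
# one branch predicts the viewing angle, the other classifies the vehicle model.
# Layer sizes, the backbone, and the concatenation-based fusion are assumptions,
# not details taken from the paper.
import torch
import torch.nn as nn
import torchvision.models as models


class TwoBranchVehicleNet(nn.Module):
    def __init__(self, num_views: int = 5, num_models: int = 431):
        super().__init__()
        # Shared convolutional backbone (assumed; the paper's backbone is not
        # specified in the abstract).
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        feat_dim = backbone.fc.in_features

        # Viewing angle prediction sub-network: predicts a discrete view label.
        self.view_head = nn.Linear(feat_dim, num_views)

        # Classification sub-network: uses image features plus the predicted
        # view distribution to output the vehicle model class.
        self.cls_head = nn.Linear(feat_dim + num_views, num_models)

    def forward(self, x):
        f = self.features(x).flatten(1)          # shared image features
        view_logits = self.view_head(f)          # viewing angle prediction
        view_prob = view_logits.softmax(dim=1)   # soft view information
        cls_logits = self.cls_head(torch.cat([f, view_prob], dim=1))
        return view_logits, cls_logits


if __name__ == "__main__":
    net = TwoBranchVehicleNet()
    imgs = torch.randn(2, 3, 224, 224)
    view_logits, cls_logits = net(imgs)
    print(view_logits.shape, cls_logits.shape)  # torch.Size([2, 5]) torch.Size([2, 431])
```

In practice such a model would be trained with two losses (a cross-entropy term for the view label and one for the model label); how the paper actually combines the two sub-networks and their losses is not stated in the abstract.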
Keywords: vehicle model recognition; convolutional neural network (CNN); fine-grained classification; viewing angle prediction
Classification Code: TP391.413 [Automation and Computer Technology - Computer Application Technology]