Authors: SHEN Shuxin; SONG Aiguo; YANG Yuyan; NI Jiangsheng (School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China)
Affiliation: School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
Source: Manned Spaceflight, 2022, No. 2, pp. 213-222 (10 pages)
Funding: National Key R&D Program of China (2019YFC0119304)
Abstract: To address the poor recognition of similar targets during space manipulator operations, caused by the single viewing angle of the visual sensor and complex lighting conditions, a CNN-GRU-based visual-tactile fusion target recognition system is proposed. The system consists of a manipulator, a dexterous hand, and a visual sensor, and autonomously samples visual and tactile information from the target object. A CNN-GRU network extracts spatial features from the visual information and temporal features from the tactile information, making effective use of the multi-modal data to improve recognition accuracy. Experimental results show an accuracy of 97.8% on a 14-class object classification task, 16.3% and 15.8% higher than the vision-only CNN-V and tactile-only GRU-T networks, respectively. The CNN-GRU model also clearly outperforms the traditional K-Nearest Neighbor and Support Vector Machine algorithms in both accuracy and prediction speed.
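The fusion scheme the abstract describes, a CNN branch for spatial visual features, a GRU branch for temporal tactile features, and feature-level concatenation before a classifier, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all layer sizes, the single-convolution "CNN", the tactile channel count, and the random weights are assumptions chosen only to show how the two feature streams are fused over 14 classes.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CLASSES = 14          # 14 object categories, as in the abstract

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def visual_features(img, kernels):
    """Toy CNN branch: one valid convolution + ReLU + global average pool."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    feats = []
    for k in kernels:
        out = np.array([[np.sum(img[i:i + kh, j:j + kw] * k)
                         for j in range(W - kw + 1)]
                        for i in range(H - kh + 1)])
        feats.append(np.maximum(out, 0).mean())   # ReLU, then pool to a scalar
    return np.array(feats)                        # one spatial feature per filter

def gru_features(seq, Wz, Uz, Wr, Ur, Wh, Uh):
    """Toy GRU branch: standard GRU update run over the tactile time series."""
    h = np.zeros(Uz.shape[0])
    for x in seq:
        z = sigmoid(Wz @ x + Uz @ h)              # update gate
        r = sigmoid(Wr @ x + Ur @ h)              # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
        h = (1 - z) * h + z * h_tilde
    return h                                      # final hidden state = temporal features

# Random inputs and weights, just to exercise the shapes (hypothetical sizes).
img = rng.standard_normal((16, 16))               # one visual frame
touch = rng.standard_normal((20, 6))              # 20 timesteps of 6 tactile channels
kernels = rng.standard_normal((8, 3, 3))          # 8 conv filters
D = 8                                             # GRU hidden size
Wz, Wr, Wh = (rng.standard_normal((D, 6)) * 0.1 for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))

v = visual_features(img, kernels)                 # spatial features, shape (8,)
t = gru_features(touch, Wz, Uz, Wr, Ur, Wh, Uh)   # temporal features, shape (8,)
fused = np.concatenate([v, t])                    # feature-level fusion

W_out = rng.standard_normal((N_CLASSES, fused.size)) * 0.1
logits = W_out @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()                              # softmax over the 14 classes
print(probs.argmax())                             # predicted class index
```

In the paper's setting the two branches would be trained jointly end to end; the sketch only shows the data flow, with the concatenation step being where the multi-modal information is combined before classification.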