Authors: HE Liang, XUE Long[1,2], ZHENG Jian-hong, LIU Mu-hua[1,2], LI Jing[1,2]
Affiliations: [1] College of Engineering, Jiangxi Agricultural University, Nanchang 330045, China; [2] Jiangxi Key Laboratory of Modern Agricultural Equipment, Nanchang 330045, China
Source: Science Technology and Engineering (《科学技术与工程》), 2023, No. 16, pp. 6845-6852 (8 pages)
Funding: Jiangxi Modern Agricultural Research Collaborative Innovation Project (JXXTCX201802-02)
Abstract: To address the picking-point recognition problem encountered in developing a lotus pod picking robot, a method for computing lotus pod picking points and picking posture was proposed. A two-stage segmentation network based on YOLO (you only look once) and DeepLab v3+ was designed, and the algorithm was made lightweight by adopting MobileNet v2 as the feature extraction network of DeepLab v3+. The segmentation results were then refined with image processing to compute the picking point and picking posture of each lotus pod. Validation experiments on 50 original images show that the algorithm achieves a success rate of 88.89% and an average frame rate of 34.41 FPS. The resulting algorithm can provide effective information for automated lotus pod harvesting machinery; it is lightweight and efficient, and promotes the application of computer vision and neural networks in modern agriculture.
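The abstract describes a pipeline of YOLO detection, DeepLab v3+ segmentation with a MobileNet v2 backbone, and image post-processing that yields the picking point and posture. The record does not include the paper's implementation, so the sketch below only illustrates what the final post-processing step might look like under stated assumptions: given a binary stalk mask such as the segmentation stage would produce, it derives a candidate picking point (mask centroid) and an in-plane posture angle (straight-line fit) with OpenCV. The helper name `picking_point_and_posture`, the synthetic test mask, and the centroid/line-fit rules are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch (assumed, not the paper's code): estimate a picking point and
# posture angle from a binary stalk mask using OpenCV.
import cv2
import numpy as np

def picking_point_and_posture(stalk_mask: np.ndarray):
    """Return ((x, y) picking point, in-plane angle in degrees) for the largest stalk region."""
    contours, _ = cv2.findContours(stalk_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    stalk = max(contours, key=cv2.contourArea)  # keep the largest connected region
    # Fit a straight line to the stalk contour: its direction gives the posture angle.
    vx, vy, _, _ = cv2.fitLine(stalk, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    angle = float(np.degrees(np.arctan2(vy, vx)))
    # Use the region centroid as the picking point (an assumption for this sketch).
    m = cv2.moments(stalk)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    return (cx, cy), angle

if __name__ == "__main__":
    # Synthetic stalk mask: a thin tilted stroke on a blank image, standing in
    # for the segmentation output of the two-stage network.
    mask = np.zeros((240, 320), dtype=np.uint8)
    cv2.line(mask, (100, 200), (160, 60), 255, thickness=6)
    point, angle = picking_point_and_posture(mask)
    print(f"picking point ~ ({point[0]:.1f}, {point[1]:.1f}), posture angle ~ {angle:.1f} deg")
```

In a full system, this step would run on each stalk mask cropped out by the YOLO detection stage and segmented by DeepLab v3+, and the resulting point and angle would be handed to the harvesting manipulator.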
Keywords: picking robot; deep learning; picking point calculation; image processing; computer vision
Classification code: S225 [Agricultural Science - Agricultural Mechanization Engineering]