Authors: ZHANG Shihao; SHEN Lei; SONG Lijie; HAN Tengfei; SONG Yuyang [4]; FANG Yulin [4]; SU Baofeng [1,2,3]
Affiliations: [1] College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China; [2] Key Laboratory of Agricultural Internet of Things, Ministry of Agriculture and Rural Affairs, Yangling 712100, China; [3] Shaanxi Key Laboratory of Agriculture Information Perception and Intelligent Service, Yangling 712100, China; [4] College of Enology, Northwest A&F University, Yangling 712100, China
Source: Transactions of the Chinese Society of Agricultural Engineering (《农业工程学报》), 2023, Issue 21, pp. 172-180 (9 pages)
Funding: Key Research and Development Program of Ningxia Hui Autonomous Region, China (2021BEF02017).
Abstract: To accurately identify and position compound buds against complex backgrounds, this study proposes a visual recognition method for concurrent buds and secondary buds based on a combined far/near viewing strategy. By applying the Euclidean distance on the visual mapping plane, concurrent buds are detected in the far view, and secondary buds are identified in the near view within the mapped concurrent-bud region, improving the recognition accuracy of small objects in the field. Tests on 60 compound-bud samples showed an overall average confidence of 0.905 and an average detection time of 18.1 ms. The study further proposes an online detection and positioning method for compound buds in the field: the same compound bud is detected and positioned over five consecutive frames, with the initial and latest frame data deleted and updated in real time, until the relative error of the positioning coordinates over five consecutive frames falls below an error threshold, at which point the average positioning coordinate is taken. This improves positioning accuracy against natural backgrounds and robustness under complex lighting. Test results show positioning accuracies of ±0.916 mm and ±0.654 mm for concurrent and secondary buds, respectively, with repeatability and precision meeting the requirements of secondary-bud removal. The system as a whole is reliable and effective, providing high-quality technical support for bud-thinning robots to position concurrent and secondary buds online, and laying a foundation for automated bud thinning in orchards.

Efficient and accurate identification and positioning of compound buds can greatly contribute to orchard robots performing automatic bud-removal operations, thereby improving the efficiency of flower and fruit thinning. This study aims to accurately identify and position small objects in complex backgrounds for grape bud removal in the field. (1) A visual recognition system was proposed for the concurrent bud and secondary bud using a combination of far and near views. The Euclidean distance was applied on the visual mapping plane. The concurrent bud was identified in the far view, whereas the secondary bud was identified in the near view within the concurrent-bud mapping area, in order to improve the recognition accuracy of small target objects in the field. The original images (2132 images) were randomly divided into a training set (1735 images) and a validation set (397 images) at a ratio of 8:2. The training and validation sets were uniformly cropped to 2944×1656 pixels and then proportionally downscaled by a factor of 2.3 to 1280×720 pixels. Data augmentation was carried out by three random combinations of scaling, flipping, and color gamut transformation on the concurrent- and secondary-bud datasets. Images whose augmentation distorted the target features too severely were manually excluded from the dataset. The training set was thereby enlarged to 15840 images. YOLOv5m and YOLOv5s were selected as the network models for concurrent-bud and secondary-bud detection, respectively. The sizes of the trained models were 42.1 and 14.4 MB, with AP of 0.702 and 0.773 and F1 scores of 0.685 and 0.765 on the test sets of concurrent- and secondary-bud images, respectively, where the average inference time per image was 10.62 and 7.01 ms. Tests on 60 compound-bud images showed that the overall average confidence of this improved model was 0.905, and the average detection time was 18.1 ms per process. (2) An online detection and positioning method was then proposed for compound buds in the field. The same compound bud was detected and positioned over five consecutive frames, with the initial and latest frame data deleted and updated in real time, until the relative error of the positioning coordinates over five consecutive frames was below the error threshold; the average positioning coordinates were then taken as the bud position. The positioning accuracies for the concurrent and secondary buds were ±0.916 and ±0.654 mm, respectively, and the repeatability and precision met the requirements of secondary-bud removal.
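The far/near combination described in the abstract can be sketched roughly as follows: a secondary-bud detection from the near view is accepted only if its center lies within the region mapped from a far-view concurrent-bud box, using Euclidean distance on the mapping plane. The box format, the radius parameter, and the function names here are illustrative assumptions, not details from the paper.

```python
import math

def box_center(box):
    """Center (x, y) of an axis-aligned box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def within_mapped_region(secondary_box, concurrent_box_mapped, radius):
    """Accept a near-view secondary-bud detection only if its center lies
    within `radius` pixels (Euclidean distance) of the center of the
    concurrent-bud box mapped from the far view onto the near-view plane."""
    sx, sy = box_center(secondary_box)
    cx, cy = box_center(concurrent_box_mapped)
    return math.hypot(sx - cx, sy - cy) <= radius
```

Gating near-view detections this way discards spurious small-object hits outside the concurrent-bud region, which is consistent with the reported gain in small-target recognition accuracy.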
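The five-frame online positioning rule can be sketched as a sliding window over per-frame coordinates: the oldest frame is dropped as each new one arrives, and the window mean is returned once every coordinate in the window agrees with the mean to within a relative-error threshold. The threshold value, the exact relative-error definition, and the function signature are assumptions for illustration only.

```python
from collections import deque

def rolling_position(frames, rel_tol=0.01, window=5):
    """Slide a window of `window` consecutive per-frame coordinates
    (e.g. (x, y, z) from an RGB-D camera). When every coordinate in the
    window deviates from the window mean by less than `rel_tol`
    (relative error), return the mean as the bud's position; otherwise
    keep consuming frames, dropping the oldest each time."""
    buf = deque(maxlen=window)
    for xyz in frames:
        buf.append(xyz)
        if len(buf) < window:
            continue  # not yet enough consecutive frames
        n = len(xyz)
        mean = [sum(c[i] for c in buf) / window for i in range(n)]
        stable = all(
            abs(c[i] - mean[i]) <= rel_tol * abs(mean[i])
            for c in buf for i in range(n) if mean[i] != 0
        )
        if stable:
            return tuple(mean)
    return None  # no stable five-frame window found
```

Averaging only once consecutive frames agree suppresses single-frame depth noise, matching the abstract's rationale for robustness under complex lighting and natural backgrounds.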
Keywords: image processing; positioning; deep learning; grape; compound bud; RGB-D; bud removal
Classification codes: S126 [Agricultural Sciences: Agricultural Foundation Sciences]; S24