Author Affiliations: [1] Guangdong-Hong Kong-Macao Greater Bay Area Artificial Intelligence Application Technology Research Institute, Shenzhen Polytechnic University, Shenzhen 518055, China; [2] School of Computer Science and Software Engineering, University of Science and Technology Liaoning, Anshan 114051, China
Source: Computers, Materials & Continua, 2025, Issue 2, pp. 1985-1999 (15 pages)
Funding: Supported by the Guangdong Province Rural Science and Technology Commissioner Project "Zen Tea Reliable Traceability and Intelligent Planting Key Technology Research and Development, Promotion and Application" (KTP20210199); the Special Project of the Guangdong Provincial Education Department "Research on Abnormal Behavior Recognition Technology of Pregnant Sows Based on Graph Convolution" (2021ZDZX1091); the Guangdong Basic and Applied Basic Research Foundation under Grant 2023A1515110729; the Shenzhen Science and Technology Program under Grant 20231128093642002; and the Research Foundation of Shenzhen Polytechnic University under Grant 6023312007K.
Abstract: This paper introduces an efficient method for distributed drone-based fruit recognition and localization, tailored to the precision and security requirements of autonomous agricultural operations. The method incorporates depth information for precise localization and uses a streamlined detection network built around the RepVGG module, which replaces the traditional C2f module to enhance detection performance while maintaining speed. To bolster the detection of small, distant fruits in complex settings, we integrate Selective Kernel Attention (SKAttention) and a specialized small-target detection layer, allowing the system to manage difficult conditions such as variable lighting and obstructive foliage. To reinforce security, the recognition and localization tasks are distributed among multiple drones, enhancing resilience against tampering and data manipulation while also optimizing resource allocation through collaborative processing. The model remains lightweight and is optimized for rapid, accurate detection, which is essential for real-time applications. Validated with a D435 depth camera, the proposed system achieves a mean Average Precision (mAP) of 0.943 and a frame rate of 169 FPS, improvements of 0.039 in mAP and 25 FPS over the baseline, and reduces the average localization error to 0.82 cm, highlighting the model's high precision. These enhancements render the system highly effective for secure, autonomous fruit-picking operations, addressing significant performance and cybersecurity challenges in agriculture and establishing a foundation for reliable, efficient, and secure distributed fruit-picking applications in contemporary agricultural practice.
Keywords: object detection; deep learning; machine learning
CLC Number: TP391.41 [Automation and Computer Technology - Computer Application Technology]
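
The abstract reports an average localization error of 0.82 cm obtained by combining detections with D435 depth readings. As a point of reference, the sketch below shows one common way to turn a detected pixel and its depth value into 3D camera coordinates using the pinhole model; the intrinsics, function name, and sample values are illustrative assumptions, not taken from the paper.

    # Minimal depth-based localization sketch (illustrative; not the paper's implementation).
    # Back-project a detected fruit's box centre (u, v) and depth Z into camera coordinates
    # with the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    import numpy as np

    # Assumed intrinsics for a 640x480 RealSense D435 colour stream (example values only).
    FX, FY = 615.0, 615.0   # focal lengths in pixels
    CX, CY = 320.0, 240.0   # principal point in pixels

    def deproject(u, v, depth_m):
        """Map a pixel (u, v) with depth in metres to 3D camera coordinates (X, Y, Z)."""
        x = (u - CX) * depth_m / FX
        y = (v - CY) * depth_m / FY
        return np.array([x, y, depth_m])

    # Example: a fruit detected at pixel (412, 198) with a median box depth of 0.65 m.
    print(deproject(412, 198, 0.65))  # -> approx. [0.097, -0.044, 0.65] metres

In practice the depth at a single pixel can be noisy, so aggregating depth over the box interior (for example, taking the median) before deprojection is a reasonable robustness choice; whether the paper does exactly this is not stated in the abstract.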