Authors: QU Hai-Cheng [1]; ZHANG Wang (School of Software, Liaoning Technical University, Huludao 125105, China)
Affiliation: [1] School of Software, Liaoning Technical University, Huludao 125105, China
Source: Computer Systems & Applications, 2023, Issue 11, pp. 276-285 (10 pages)
Abstract: This study proposes a multi-range instrument recognition method based on YOLOv7+U2-Net to address the difficulty of locating instruments and the low inference accuracy encountered when detecting and recognizing pointer instruments in complex environments. To improve the quality of the U2-Net model's input images, the YOLOv7 detector, which offers high inference accuracy and speed, is selected; the detected and cropped images form the model's input dataset, and rotation correction is applied to the input images so that the model handles instruments viewed at multiple angles. To address the poor accuracy and slow speed of instrument-reading inference, the ordinary convolutions in RSU4-RSU7 of the U2-Net decoding stage are replaced with depthwise separable convolutions, and an attention mechanism is introduced on this basis to improve overall inference speed and accuracy. In addition, to broaden the method's applicability, a recognition-accuracy judgment over multiple threshold ranges is proposed to suit various application scenarios. Comparative experiments on the collected dataset show that, compared with template matching, SegNet, PSPNet, Deeplabv3+, and U-Net, the proposed method achieves a recognition accuracy of 96.5% and performs well across multiple threshold intervals.
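The decoder modification described in the abstract swaps standard convolutions for depthwise separable ones. A minimal sketch of why this reduces computation (the channel and kernel sizes below are illustrative assumptions, not values taken from the paper):

```python
# Hypothetical illustration: parameter counts for a standard k x k
# convolution vs. a depthwise separable one (depthwise k x k conv
# followed by a 1 x 1 pointwise conv), the substitution applied to
# RSU4-RSU7 in the U2-Net decoder. Bias terms are omitted.

def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights of a standard k x k convolution."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise conv (one k x k filter per input channel) plus a
    1 x 1 pointwise conv that mixes channels."""
    return c_in * k * k + c_in * c_out

# Example layer: 64 -> 64 channels, 3 x 3 kernel (assumed sizes).
standard = conv_params(64, 64, 3)                   # 36864
separable = depthwise_separable_params(64, 64, 3)   # 576 + 4096 = 4672
print(standard, separable, round(separable / standard, 3))
```

For this example layer the separable variant keeps roughly 13% of the parameters, which is the source of the inference speedup the abstract claims; the accuracy gain is attributed separately to the added attention mechanism.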