Authors: Zou Yaobin [1,2]; Li Wangyang
Affiliations: [1] Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering (China Three Gorges University), Yichang 443002, China; [2] College of Computer and Information Technology, China Three Gorges University, Yichang 443002, China
Source: Foreign Electronic Measurement Technology (《国外电子测量技术》), 2024, No. 7, pp. 33-45 (13 pages)
Funding: Supported by the National Natural Science Foundation of China (61871258).
Abstract: To improve the thresholding accuracy and adaptability of the existing Otsu (maximum between-class variance) method, a symmetry-constrained between-class variance thresholding method is proposed. The proposed approach first applies the Prewitt operator to construct a gradient magnitude image from the input image, and then extracts symmetric sampling areas based on the principle of symmetry. Next, a threshold is selected by maximizing the symmetry-constrained between-class variance objective function, and the method checks whether the symmetric sampling areas satisfy the symmetry condition under this threshold. If the symmetry condition is not satisfied, the input image is given a symmetry correction based on the symmetric sampling areas, and the symmetry-constrained between-class variance objective function is then applied to the corrected symmetric sampling areas to select a threshold. Finally, the chosen threshold is used to threshold the input image. The thresholding performance of the proposed method is compared with Otsu's method and four improved variants of Otsu's method on 28 synthetic images and 70 real-world images. Experimental results show that the proposed method achieves misclassification error rates of 0.0106 on synthetic images and 0.016 on real-world images, reducing the misclassification error by 91.4% and 86.1%, respectively, relative to the second most accurate method. Although the proposed method holds no advantage in computational efficiency, it exhibits more robust thresholding adaptability and higher thresholding accuracy across test images of different modalities.
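As a rough illustration of the standard building blocks the abstract refers to, the Python sketch below computes a Prewitt gradient magnitude image and a classic Otsu threshold by maximizing the between-class variance. The function names (prewitt_gradient_magnitude, otsu_threshold) are placeholders chosen here for illustration; the paper's symmetry-constrained objective function, symmetric sampling areas, and symmetry correction step are specific to the proposed method and are not reproduced in this sketch.

import numpy as np
from scipy import ndimage

def prewitt_gradient_magnitude(image):
    # Combine horizontal and vertical Prewitt responses into a gradient magnitude image.
    img = image.astype(float)
    gx = ndimage.prewitt(img, axis=1)
    gy = ndimage.prewitt(img, axis=0)
    return np.hypot(gx, gy)

def otsu_threshold(image, levels=256):
    # Classic Otsu: choose the gray level k that maximizes the between-class variance
    # sigma_B^2(k) = [mu_T * omega(k) - mu(k)]^2 / [omega(k) * (1 - omega(k))].
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()                      # normalized gray-level histogram
    omega = np.cumsum(p)                       # cumulative class probability
    mu = np.cumsum(p * np.arange(levels))      # cumulative class mean
    mu_t = mu[-1]                              # global mean
    valid = (omega > 0) & (omega < 1)          # avoid division by zero at the extremes
    sigma_b2 = np.zeros(levels)
    sigma_b2[valid] = (mu_t * omega[valid] - mu[valid]) ** 2 / (omega[valid] * (1.0 - omega[valid]))
    return int(np.argmax(sigma_b2))

# Example usage on an 8-bit grayscale array `img`:
#   grad = prewitt_gradient_magnitude(img)    # gradient magnitude image
#   t = otsu_threshold(img)                   # standard Otsu threshold
#   binary = img > t                          # thresholded result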
Classification: TP391 [Automation and Computer Technology - Computer Application Technology]