Pupil Localization Method based on Vision Transformer

Authors: WANG Li [1]; WANG Changyuan [2]

Affiliations: [1] School of Optoelectronic Engineering, Xi'an Technological University, Xi'an 710021, China; [2] School of Computer Science and Engineering, Xi'an Technological University, Xi'an 710021, China

Source: Journal of Xi'an Technological University, 2023, No. 6, pp. 561-567 (7 pages)

Funding: National Natural Science Foundation of China (52072293).

Abstract: Existing pupil localization methods are easily constrained by the quality of the pupil image. To address this, a CNN is used to extract local image features, and a Transformer encoder is then applied to capture global dependencies, yielding more accurate pupil-center information. The proposed method is compared with the mainstream DeepEye and VCF pupil localization models on a public dataset. The results show that the proposed hybrid-structure Vision Transformer pupil localization method improves the pupil-center detection rate within a 5-pixel error by 30% over DeepEye and by 20% over VCF.
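The abstract describes a hybrid pipeline in which a CNN extracts local features and a Transformer encoder then models global dependencies before the pupil center is predicted. The PyTorch sketch below illustrates that general hybrid structure only, under stated assumptions: the plain convolutional stem (standing in for the split-attention residual backbone named in the keywords), the layer sizes, the omitted positional encoding, and the (x, y) regression head are hypothetical choices, not the authors' implementation.

import torch
import torch.nn as nn

class HybridPupilLocalizer(nn.Module):
    """Illustrative CNN + Transformer-encoder regressor for a pupil center.

    A sketch of the general hybrid structure described in the abstract, not the
    authors' network: the plain convolutional stem, all layer sizes, and the
    (x, y) regression head are assumptions. Positional encoding is omitted
    for brevity.
    """

    def __init__(self, embed_dim=256, num_heads=8, num_layers=4):
        super().__init__()
        # CNN stem: extracts local features and downsamples the eye image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer encoder: models global dependencies between feature-map tokens.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Regression head: predicts normalized (x, y) pupil-center coordinates.
        self.head = nn.Linear(embed_dim, 2)

    def forward(self, x):
        feats = self.cnn(x)                        # (B, C, H', W') local features
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', C) patch tokens
        tokens = self.encoder(tokens)              # global self-attention over tokens
        center = self.head(tokens.mean(dim=1))     # pool tokens, regress (x, y)
        return torch.sigmoid(center)               # coordinates scaled to [0, 1]

if __name__ == "__main__":
    model = HybridPupilLocalizer()
    eye = torch.randn(1, 1, 96, 96)                # one grayscale eye crop
    print(model(eye).shape)                        # torch.Size([1, 2])

Flattening the CNN feature map into tokens is what lets the encoder relate distant regions of the eye image, which is the property the abstract credits for reducing sensitivity to pupil-image quality.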

Keywords: deep learning; pupil localization; Vision Transformer; split-attention residual network

CLC number: TP391.41 [Automation and Computer Technology - Computer Application Technology]

 
