Gaze estimation algorithm with dual-branch feature fusion


Authors: XUE Nan [1]; LIU Li-fen; LI Peng-cheng [1] (Heilongjiang Province Key Laboratory of Pattern Recognition and Information Perception, Harbin University of Science and Technology, Harbin 150080, China)

Affiliation: [1] Heilongjiang Province Key Laboratory of Pattern Recognition and Information Perception, Harbin University of Science and Technology, Harbin 150080, China

Source: Control and Decision (《控制与决策》), 2025, No. 4, pp. 1247-1256 (10 pages)

Abstract: Gaze estimation, which predicts the position or direction of human eye gaze, plays a crucial role in human-computer interaction and computer vision applications. To address the problems of feature heterogeneity and incomplete feature utilization, this paper proposes a gaze estimation algorithm based on dual-branch feature fusion. First, a dual-branch network model combining an Agent Swin Transformer network with a residual network is constructed to extract gaze features: the improved Agent Swin Transformer network forms the global feature extraction branch, extracting global semantic features layer by layer, while the residual network forms the local feature extraction branch, extracting local detail features at different scales. The feature tensors from the two branches are then concatenated through feature fusion to enhance the model's representation capability. Second, the Agent Swin Transformer network integrates the efficient multi-scale attention (EMA) module and the spatial and channel reconstruction convolution (SCConv) module to strengthen features, preserve information effectiveness, and reduce complexity and computational cost. Finally, head pose estimation is combined with the fused features to obtain the final gaze direction, mitigating the influence of interfering factors on eye appearance. Extensive experiments on the MPIIFaceGaze dataset show that the proposed method achieves a mean angular gaze error of 4.23°; compared with current mainstream methods of the same type, the proposed algorithm estimates gaze more accurately.
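To make the described pipeline concrete, the following is a minimal PyTorch sketch of the dual-branch feature-fusion idea: a transformer-based global branch and a residual-network local branch process the same face image, their pooled feature tensors are concatenated together with the head pose, and a small regressor outputs the gaze angles, evaluated with the mean angular error. This is an illustrative approximation only, not the authors' implementation: a stock torchvision Swin-T stands in for the improved Agent Swin Transformer (the EMA and SCConv modules are omitted), ResNet-18 stands in for the residual branch, and all class and function names (DualBranchGazeNet, angular_error_deg) are hypothetical.

```python
# Illustrative sketch of dual-branch feature fusion for gaze estimation.
# NOTE: not the authors' implementation; backbones and names are stand-ins.
import torch
import torch.nn as nn
from torchvision import models


class DualBranchGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Global branch: transformer backbone extracting global semantic features
        # (stand-in for the improved Agent Swin Transformer; EMA/SCConv omitted).
        swin = models.swin_t(weights=None)
        swin.head = nn.Identity()              # keep the 768-d pooled features
        self.global_branch = swin
        # Local branch: residual network extracting local detail features.
        resnet = models.resnet18(weights=None)
        resnet.fc = nn.Identity()              # keep the 512-d pooled features
        self.local_branch = resnet
        # Fusion: concatenate both feature tensors plus head pose (pitch, yaw),
        # then regress the gaze direction as (pitch, yaw) angles.
        self.regressor = nn.Sequential(
            nn.Linear(768 + 512 + 2, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 2),
        )

    def forward(self, face: torch.Tensor, head_pose: torch.Tensor) -> torch.Tensor:
        g = self.global_branch(face)           # (B, 768) global semantic features
        l = self.local_branch(face)            # (B, 512) local detail features
        fused = torch.cat([g, l, head_pose], dim=1)
        return self.regressor(fused)           # (B, 2) gaze (pitch, yaw) in radians


def angular_error_deg(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean 3D angular error in degrees between gaze directions given as
    (pitch, yaw) angles; this is the metric reported on MPIIFaceGaze."""
    def to_vec(py):
        pitch, yaw = py[:, 0], py[:, 1]
        return torch.stack([
            -torch.cos(pitch) * torch.sin(yaw),
            -torch.sin(pitch),
            -torch.cos(pitch) * torch.cos(yaw),
        ], dim=1)
    v1 = nn.functional.normalize(to_vec(pred), dim=1)
    v2 = nn.functional.normalize(to_vec(target), dim=1)
    cos = (v1 * v2).sum(dim=1).clamp(-1.0, 1.0)
    return torch.rad2deg(torch.acos(cos)).mean()


if __name__ == "__main__":
    model = DualBranchGazeNet()
    faces = torch.randn(4, 3, 224, 224)        # batch of face crops
    poses = torch.randn(4, 2)                  # (pitch, yaw) head pose per face
    gaze = model(faces, poses)
    print(gaze.shape, angular_error_deg(gaze, torch.zeros(4, 2)).item())
```

Concatenation keeps the information from both branches intact and lets the regressor weigh global and local cues, while appending the head pose lets the network compensate for pose-induced changes in eye appearance, mirroring the final step described in the abstract.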

Keywords: gaze estimation; dual-branch; feature fusion; Agent Swin Transformer; residual network; spatial and channel reconstruction convolution; efficient multi-scale attention

CLC number: TP391.41 (Automation and Computer Technology / Computer Application Technology)

 
