Head pose estimation method based on improved HopeNet


Authors: ZHANG Liguo (张立国) [1]; HU Lin (胡林) [1]

Affiliation: [1] Measurement Technology and Instrumentation Key Laboratory, Yanshan University, Qinhuangdao 066004, China

Source: Chinese High Technology Letters (《高技术通讯》), 2024, No. 5, pp. 486-495 (10 pages)

Funding: Supported by the Central Government Guided Local Special Project of Hebei Province (199477141G).

Abstract: To address the poor accuracy of head pose estimation algorithms that require no prior knowledge on images with complex backgrounds and at multiple scales, a head pose estimation method based on an improved HopeNet is proposed. First, a feature fusion structure is added to the backbone network so that the model can fully exploit both the deep and the shallow feature information of the network, improving its feature representation ability. Then, a feature squeeze-and-excitation module is added to the residual structure of the backbone, enabling the network to adaptively learn weights for the importance of different feature layers so that the model attends more to the target information. Experimental results show that, compared with HopeNet, the proposed method improves accuracy on the AFLW2000 dataset by 31.15%, reducing the mean error to 4.20°, and that it remains robust on images with complex backgrounds.
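To make the two architectural changes described above concrete, the following is a minimal PyTorch sketch of an SE-augmented, feature-fusing HopeNet-style model. It assumes a ResNet-50 backbone with three binned-classification heads, as in the original HopeNet; the module names (SEBlock, FusedHopeNet), the lateral 1x1 projection, the reduction ratio, and the pooled-sum fusion are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class SEBlock(nn.Module):
    """Squeeze-and-excitation: learns per-channel weights so the network can
    re-weight feature maps by importance (the feature squeeze-excitation module)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        w = x.mean(dim=(2, 3))                            # squeeze: global average pool
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(w))))  # excitation: channel weights in (0, 1)
        return x * w.view(x.size(0), -1, 1, 1)            # recalibrate feature maps

class FusedHopeNet(nn.Module):
    """HopeNet-style pose estimator: ResNet-50 features, SE recalibration on the
    deep stage, a simple shallow/deep fusion, and one classification head per angle."""
    def __init__(self, num_bins=66):
        super().__init__()
        backbone = torchvision.models.resnet50()
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.layer1, self.layer2 = backbone.layer1, backbone.layer2
        self.layer3, self.layer4 = backbone.layer3, backbone.layer4
        self.se = SEBlock(2048)                 # emphasize informative deep channels
        self.lateral = nn.Conv2d(512, 2048, 1)  # project shallow features to deep width
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc_yaw = nn.Linear(2048, num_bins)    # binned classification heads,
        self.fc_pitch = nn.Linear(2048, num_bins)  # as in the original HopeNet
        self.fc_roll = nn.Linear(2048, num_bins)

    def forward(self, x):
        x = self.stem(x)
        shallow = self.layer2(self.layer1(x))               # low-level detail
        deep = self.se(self.layer4(self.layer3(shallow)))   # high-level semantics
        fused = self.pool(deep) + self.pool(self.lateral(shallow))  # fuse both paths
        f = fused.flatten(1)
        return self.fc_yaw(f), self.fc_pitch(f), self.fc_roll(f)

# Example: yaw, pitch, roll = FusedHopeNet()(torch.randn(1, 3, 224, 224))

The original HopeNet trains such heads with a cross-entropy loss over the angle bins plus a regression loss on the expected angle; that training scheme is orthogonal to the two structural modifications sketched here.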

Keywords: head pose estimation; HopeNet; feature fusion; feature squeeze-and-excitation; adaptive learning

CLC number: TP391.41 [Automation and Computer Technology - Computer Application Technology]

 
