Authors: YAN Zhongxin; BAI Lin[1]; LI Taoshen[1] (School of Computer and Electronic Information, Guangxi University, Nanning 530004, China)
Affiliation: [1] School of Computer and Electronic Information, Guangxi University, Nanning 530004, China
Source: Journal of Chinese Computer Systems (《小型微型计算机系统》), 2024, No. 2, pp. 461-469 (9 pages)
Funding: National Natural Science Foundation of China (61966003); Guangxi Natural Science Foundation (2020GXNSFAA159171).
Abstract: In pursuit of more accurate keypoint detection, many existing human pose estimation studies build their models on complex deep network architectures while ignoring the actual deployment cost. As a result, such models are difficult to deploy on resource-constrained edge devices and lack practicality. To address this problem, this paper designs a lightweight human pose estimation model that integrates self-knowledge distillation and convolutional compression. First, the model uses an improved EfficientNet network as an encoder to extract multi-scale image features. Second, a lightweight upsampling decoder based on depthwise separable transposed convolution is designed to estimate human pose. Finally, a lightweight multi-scale bidirectional fusion module and a self-knowledge distillation module are adopted to further improve estimation accuracy. Extensive qualitative, quantitative, and ablation experiments on the COCO and MPII benchmark datasets show that the proposed model not only achieves accurate human pose estimation but also significantly reduces computational complexity.
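The abstract names depthwise separable transposed convolution as the basis of the lightweight upsampling decoder. The following NumPy sketch is purely illustrative (it is not the authors' implementation, and the kernel size, stride, and channel counts are assumed for the example): it factors a transposed convolution into a per-channel (depthwise) transposed convolution that upsamples spatially, followed by a pointwise 1×1 convolution that mixes channels, which is what makes the parameter count much smaller than a standard transposed convolution.

```python
import numpy as np

def transposed_conv2d(x, k, stride=2, pad=1):
    """Single-channel transposed convolution: scatter-add each input
    pixel's contribution (x[i, j] * kernel) onto a strided output grid,
    then crop `pad` pixels from each border."""
    H, W = x.shape
    kh, kw = k.shape
    full = np.zeros(((H - 1) * stride + kh, (W - 1) * stride + kw))
    for i in range(H):
        for j in range(W):
            full[i * stride:i * stride + kh, j * stride:j * stride + kw] += x[i, j] * k
    return full[pad:full.shape[0] - pad, pad:full.shape[1] - pad]

def depthwise_separable_transposed_conv(x, dw_kernels, pw_weights, stride=2, pad=1):
    """x: (C_in, H, W); dw_kernels: (C_in, kh, kw), one kernel per input
    channel; pw_weights: (C_out, C_in), the pointwise 1x1 convolution."""
    # Depthwise step: upsample each channel independently with its own kernel.
    dw_out = np.stack([transposed_conv2d(x[c], dw_kernels[c], stride, pad)
                       for c in range(x.shape[0])])
    # Pointwise step: a 1x1 convolution is just a linear mix over channels.
    return np.tensordot(pw_weights, dw_out, axes=([1], [0]))

# Toy sizes (assumed): 3 input channels, 8 output channels, 4x4 kernel,
# stride 2, pad 1 -> spatial size exactly doubles (4x4 -> 8x8).
x = np.random.rand(3, 4, 4)
dw = np.random.rand(3, 4, 4)
pw = np.random.rand(8, 3)
y = depthwise_separable_transposed_conv(x, dw, pw)
print(y.shape)  # (8, 8, 8)

# Parameter comparison: separable = C_in*kh*kw + C_out*C_in,
# standard transposed conv = C_in*C_out*kh*kw.
sep_params = 3 * 4 * 4 + 8 * 3   # 72
std_params = 3 * 8 * 4 * 4       # 384
print(sep_params, std_params)
```

At these toy sizes the factorization already needs 72 parameters instead of 384, and the gap widens with more channels, which is the compression effect the paper's lightweight decoder relies on.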
Classification: TP391 [Automation and Computer Technology: Computer Application Technology]