Token Masked Pose Transformers Are Efficient Learners  

Authors: Xinyi Song, Haixiang Zhang, Shaohua Li

Affiliations: [1] School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China; [2] College of Artificial Intelligence, Nankai University, Tianjin 300350, China

Source: Computers, Materials & Continua, 2025, Issue 5, pp. 2735-2750 (16 pages)

Funding: Supported in part by the Scientific Research Start-Up Fund of Zhejiang Sci-Tech University, under the project titled "(National Treasury) Development of a Digital Silk Museum System Based on Metaverse and AR" (Project No. 11121731282202-01).

Abstract: In recent years, Transformers have achieved remarkable results in computer vision: by converting image features into tokens, their attention layers effectively model global dependencies within an image. However, Transformers incur high computational costs when processing large-scale image data, which limits their feasibility in real-time applications. To address this issue, we propose Token Masked Pose Transformers (TMPose), an efficient Transformer network for pose estimation. The network applies semantic-level masking to tokens and employs three different masking strategies to reduce computational complexity while optimizing model performance. Experimental results show that TMPose reduces computational complexity by 61.1% on the COCO validation dataset with negligible loss in accuracy, and its performance on the MPII dataset is also competitive. This research not only maintains the accuracy of pose estimation but also significantly reduces the demand for computational resources, providing new directions for further studies in this field.
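The abstract does not spell out the masking mechanism, so the following is a minimal PyTorch sketch of the general idea behind token masking for efficiency: drop a fraction of the tokens before they enter the attention layers, so that self-attention (whose cost grows roughly quadratically in the token count) runs on far fewer tokens. The TokenMaskedEncoder class, the keep_ratio parameter, and the random token scoring are illustrative assumptions, not the paper's three semantic-level masking strategies.

```python
import torch
import torch.nn as nn

class TokenMaskedEncoder(nn.Module):
    """Toy Transformer encoder that keeps only a subset of tokens
    before attention, illustrating how token masking cuts cost.
    The keep criterion here (random scores) is a placeholder; the
    paper's semantic-level strategies are not reproduced."""
    def __init__(self, dim=256, depth=4, heads=8, keep_ratio=0.4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.keep_ratio = keep_ratio

    def forward(self, tokens):  # tokens: (B, N, dim)
        B, N, _ = tokens.shape
        k = max(1, int(N * self.keep_ratio))
        # Hypothetical scoring: random here; a semantic strategy
        # would rank tokens by relevance to the pose (e.g., tokens
        # covering the person rather than the background).
        scores = torch.rand(B, N, device=tokens.device)
        keep = scores.topk(k, dim=1).indices                 # (B, k)
        idx = keep.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        kept = torch.gather(tokens, 1, idx)                  # (B, k, dim)
        return self.encoder(kept)  # attention over k << N tokens

x = torch.randn(2, 196, 256)       # e.g., 14x14 patch tokens
out = TokenMaskedEncoder()(x)
print(out.shape)                   # torch.Size([2, 78, 256])
```

With keep_ratio=0.4, attention operates on 40% of the tokens, reducing attention FLOPs to roughly 16% of the original; the paper's reported 61.1% overall reduction presumably reflects the full network and its specific masking strategies rather than this simplified calculation.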

Keywords: pattern recognition; image processing; neural network; pose transformer

CLC Number: TP391.41 [Automation and Computer Technology - Computer Application Technology]

 
