Structural Dependence Learning Based on Self-attention for Face Alignment  


Authors: Biying Li, Zhiwei Liu, Wei Zhou, Haiyun Guo, Xin Wen, Min Huang, Jinqiao Wang

Affiliations: [1] Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; [2] School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100083, China; [3] Alpha (Beijing) Private Equity, Beijing 100083, China; [4] School of Computer Science, National University of Defense Technology, Changsha 410073, China

Published in: Machine Intelligence Research, 2024, Issue 3, pp. 514-525 (12 pages)

Funding: Supported by the National Key R&D Program of China (No. 2021YFE0205700) and the National Natural Science Foundation of China (Nos. 62076235, 62276260 and 62002356); sponsored by the Zhejiang Lab (No. 2021KH0AB07) and the Ministry of Education Industry-University Cooperative Education Program (Wei Qiao Venture Group, No. E1425201).

Abstract: Self-attention aggregates similar feature information to enhance features. In face alignment, however, the attention also covers non-face areas, so it can be disturbed in challenging cases such as occlusion and lead to failed landmark predictions. In addition, our experiments show that the variance of the learned feature similarities is not large enough. To this end, we propose structural dependence learning based on self-attention for face alignment (SSFA). It restricts self-attention learning to the facial range and adaptively builds significant landmark structure dependencies. Compared with other state-of-the-art methods, SSFA improves performance on several standard facial landmark detection benchmarks and is more robust in challenging cases.

Keywords: Computer vision, face alignment, self-attention, facial structure, contextual information

Classification: TP391.41 [Automation and Computer Technology; Computer Application Technology]

 
