Authors: Biying Li, Zhiwei Liu, Wei Zhou, Haiyun Guo, Xin Wen, Min Huang, Jinqiao Wang
Affiliations: [1] Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; [2] School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100083, China; [3] Alpha (Beijing) Private Equity, Beijing 100083, China; [4] School of Computer Science, National University of Defense Technology, Changsha 410073, China
Source: Machine Intelligence Research, 2024, No. 3, pp. 514-525 (12 pages)
Funding: Supported by the National Key R&D Program of China (No. 2021YFE0205700); the National Natural Science Foundation of China (Nos. 62076235, 62276260 and 62002356); the Zhejiang Lab (No. 2021KH0AB07); and the Ministry of Education Industry-University Cooperative Education Program (Wei Qiao Venture Group, No. E1425201).
Abstract: Self-attention aggregates similar feature information to enhance features. In face alignment, however, the attention also covers non-face areas, so it can be disturbed in challenging cases such as occlusion and fail to predict landmarks accurately. In addition, the variance of the learned feature similarities observed in experiments is not large enough. To this end, we propose structural dependence learning based on self-attention for face alignment (SSFA). It limits self-attention learning to the facial range and adaptively builds significant landmark structure dependencies. Compared with other state-of-the-art methods, SSFA effectively improves performance on several standard facial landmark detection benchmarks and adapts better to challenging cases.
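The core idea the abstract describes, restricting self-attention to the facial range so that non-face positions cannot disturb the aggregation, can be sketched as a masked scaled dot-product attention. This is a minimal NumPy illustration, not the paper's actual SSFA implementation; the function name, the single-head formulation, and the boolean `face_mask` input are assumptions for the example.

```python
import numpy as np

def masked_self_attention(x, face_mask):
    """Scaled dot-product self-attention whose keys/values are restricted
    to positions inside a facial-region mask (illustrative sketch only).

    x:         (n, d) feature vectors for n spatial positions
    face_mask: (n,) boolean, True for positions inside the face region
               (at least one position must be True)
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)            # (n, n) pairwise similarities
    # Block attention to non-face positions before the softmax,
    # so they receive exactly zero weight.
    scores[:, ~face_mask] = -np.inf
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x                       # features aggregated from face area only
```

Because masked positions get zero attention weight, perturbing a non-face position's features leaves the outputs at face positions unchanged, which is the behavior the paper motivates for occlusion robustness.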
Keywords: computer vision; face alignment; self-attention; facial structure; contextual information
Classification: TP391.41 [Automation and Computer Technology — Computer Application Technology]