Authors: Akihiro Matsufuji, Wei-Fen Hsieh, Eri Sato-Shimokawara, Toru Yamaguchi
Source: Journal of Mechanics Engineering and Automation, 2019, No. 3, pp. 92-99 (8 pages)
Abstract: We propose a learning architecture for integrating multi-modal information, e.g., vision and audio. In recent years, artificial intelligence (AI) has made major progress in key tasks such as language, vision, and voice recognition. Most studies focus on how AI can achieve human-like abilities. In the field of human-robot interaction in particular, some researchers attempt to make robots converse with humans in daily life. The key challenges in making robots talk naturally in conversation are the need to consider multi-modal non-verbal information, as humans do, and to learn from a small amount of labeled multi-modal data. Previous multi-modal learning approaches require a large amount of labeled data, yet labeled multi-modal data are scarce and difficult to collect. In this research, we address these challenges by integrating single-modal classifiers, each trained on its own modality. Our architecture exploits this knowledge by using a bi-directional associative memory. Furthermore, we conducted a conversation experiment to collect multi-modal non-verbal information. We verify our approach by comparing the accuracy of our system against a conventional system trained directly on multi-modal information.
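The bi-directional associative memory (BAM) the abstract mentions can be illustrated with a minimal sketch. The code below assumes the classic Kosko-style BAM: Hebbian outer-product learning over bipolar (+1/-1) pattern pairs, with recall by iterating sign updates in both directions. The pattern vectors and function names are illustrative toy examples, not details taken from the paper.

```python
import numpy as np

def train(pairs):
    """Build the BAM weight matrix as the sum of outer products x y^T."""
    dim_x, dim_y = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((dim_x, dim_y))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def recall(W, x, steps=5):
    """Recover the associated y from an x cue by iterating
    bidirectional sign updates until (hopefully) stable."""
    x = np.asarray(x, dtype=float)
    for _ in range(steps):
        y = np.sign(W.T @ x)  # forward pass: x layer -> y layer
        x = np.sign(W @ y)    # backward pass: y layer -> x layer
    return x, y

# Toy example: associate a "vision" code with an "audio" code.
# The two x patterns are orthogonal, so recall is exact here.
pairs = [
    (np.array([1, 1, -1, -1]), np.array([1, -1, 1])),
    (np.array([1, -1, 1, -1]), np.array([-1, 1, 1])),
]
W = train(pairs)
x, y = recall(W, [1, 1, -1, -1])
print(y)  # the audio code stored with the first vision code
```

Presenting a vision-side code retrieves its paired audio-side code, which is the sense in which the architecture lets single-modal classifiers share knowledge across modalities.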
Keywords: multi-modal learning; bi-directional associative memory; non-verbal; human-robot interaction