A System of Associated Intelligent Integration for Human State Estimation  


Authors: Akihiro Matsufuji, Wei-Fen Hsieh, Eri Sato-Shimokawara, Toru Yamaguchi

Affiliation: [1] Department of Computer Science, Graduate School of Systems Design, Tokyo Metropolitan University, Hino, Tokyo 191-0065, Japan

Source: Journal of Mechanics Engineering and Automation, 2019, No. 3, pp. 92-99 (8 pages)

Abstract: We propose a learning architecture for integrating multi-modal information, e.g., vision and audio. In recent years, artificial intelligence (AI) has made major progress in key tasks such as language, vision, and voice recognition. Most studies focus on how AI can achieve human-like abilities. In particular, in the field of human-robot interaction, some researchers attempt to make robots converse with humans in daily life. The key challenges in making robots converse naturally are the need to consider multi-modal non-verbal information, as humans do, and the need to learn from small amounts of labeled multi-modal data. Previous multi-modal learning approaches require large amounts of labeled data, yet labeled multi-modal data are scarce and difficult to collect. In this research, we address these challenges by integrating single-modal classifiers, each trained on its own modality. Our architecture integrates their knowledge using a bi-directional associative memory. Furthermore, we conducted a conversation experiment to collect multi-modal non-verbal information. We verify our approach by comparing its accuracy against that of a conventional system trained on the multi-modal information directly.
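Note: the abstract describes integrating single-modal classifiers through a bi-directional associative memory (BAM), but the record contains no code. The following is a minimal illustrative sketch of a Kosko-style BAM in Python/NumPy. The class name, the `to_bipolar` helper, the one-hot state codes, and the pairing of vision-side and audio-side classifier outputs are hypothetical assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def _sign(v):
    # Threshold at zero, mapping 0 to +1 so updates never stall on ties.
    return np.where(v >= 0, 1, -1)

def to_bipolar(bits):
    # Map a binary {0, 1} vector to the bipolar {-1, +1} form a BAM uses.
    return 2 * np.asarray(bits, dtype=int) - 1

class BAM:
    """Kosko-style bidirectional associative memory (illustrative sketch).

    Pattern pairs (x, y) are stored in one weight matrix
    W = sum_k outer(x_k, y_k); recall iterates
    y <- sign(W^T x), x <- sign(W y) until a fixed point.
    """

    def __init__(self, n_x, n_y):
        self.W = np.zeros((n_x, n_y), dtype=int)

    def store(self, x_bits, y_bits):
        # Accumulate the outer product of one bipolar pattern pair.
        x, y = to_bipolar(x_bits), to_bipolar(y_bits)
        self.W += np.outer(x, y)

    def recall_y(self, x_bits, max_steps=10):
        # Recall the y-side pattern associated with an x-side cue.
        x = to_bipolar(x_bits)
        y = _sign(self.W.T @ x)
        for _ in range(max_steps):
            x = _sign(self.W @ y)
            y_next = _sign(self.W.T @ x)
            if np.array_equal(y_next, y):
                break
            y = y_next
        return (y > 0).astype(int)

# Hypothetical usage: pair one-hot state codes from a vision classifier
# with the matching codes from an audio classifier for three human states.
vision_codes = np.eye(3, dtype=int)
audio_codes = np.eye(3, dtype=int)

bam = BAM(n_x=3, n_y=3)
for v, a in zip(vision_codes, audio_codes):
    bam.store(v, a)

# Given only the vision classifier's prediction, recall the associated
# audio-side code (both sides index the same human-state label).
print(bam.recall_y(vision_codes[1]))  # [0 1 0]
```

Under the architecture the abstract sketches, each single-modal classifier would first be trained on its own modality; a memory of this kind then stores associations between their output codes, so a human-state estimate recalled from one modality can reinforce or stand in for the other. How the authors encode classifier outputs and train the memory is not specified in this record.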

Keywords: multi-modal learning; bi-directional associative memory; non-verbal; human-robot interaction

Classification: TH [Mechanical Engineering]

 
