Cybernetic-Graphic Model for Robot Imitation Learning Based on Non-contact Observation

Cited by: 4


Authors: 杨俊友 [1], 马乐 [1], 白殿春 [1], 东俊光 [2]

Affiliations: [1] School of Electrical Engineering, Shenyang University of Technology, Shenyang 110870, Liaoning, China; [2] L&A Division, Murata Machinery

Source: Robot, 2014, No. 3, pp. 309–315 (7 pages)

Funding: National Natural Science Foundation of China (51075281); Specialized Research Fund for the Doctoral Program of Higher Education (20112102110002); Natural Science Foundation of Liaoning Province (201102163)

Abstract: The cybernetic-graphic model (CGM), a new model for representing and reproducing behavior in robot imitation learning from non-contact observation, is proposed. A human-robot relationship suitable for imitation learning is established, and the precondition for imitation learning is derived: the differential motions of the system's end-effector serve as the behavioral primitives. The architecture of the CGM and a model-learning method based on visual observation sequences are presented, along with a method for segmenting observation sequences and generating the graph structure based on accumulated and instantaneous correlation functions, and a method for learning behavioral-primitive targets based on RBF (radial basis function) networks. Imitation-learning experiments on brush drawing and object grasping with robots of different structures and degrees of freedom show that the proposed model can represent and reproduce behaviors of different levels and types from visual observation alone, and that it generalizes well and is broadly applicable and practical.
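The abstract mentions learning behavioral-primitive targets with an RBF network. As a rough illustration of that idea only, the sketch below fits Gaussian RBF features to a sampled trajectory component by least squares; the choice of centers, widths, and the least-squares solve are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def rbf_design_matrix(x, centers, width):
    """Gaussian RBF features for 1-D inputs x; shape (n_samples, n_centers)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def fit_rbf(x, y, n_centers=10):
    """Least-squares fit of RBF weights to observed targets y (illustrative)."""
    centers = np.linspace(x.min(), x.max(), n_centers)
    width = (x.max() - x.min()) / n_centers  # heuristic width choice
    Phi = rbf_design_matrix(x, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, width, w

def predict_rbf(x, centers, width, w):
    return rbf_design_matrix(x, centers, width) @ w

# Toy usage: approximate a smooth end-effector trajectory component.
t = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * t)
centers, width, w = fit_rbf(t, y)
err = np.max(np.abs(predict_rbf(t, centers, width, w) - y))
```

In a pipeline like the one the abstract describes, each segmented behavioral primitive would get its own fitted target function of this kind, so the segmentation quality directly bounds how simple each fit can be.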

Keywords: robot behavior; imitation learning; cybernetic-graphic model; non-contact observation; visual observation

Classification code: TP242.6 [Automation and Computer Technology — Measurement Technology and Automation Devices]
