Authors: WANG Liling [1,2]; LI Sen; MA Dong
Affiliations: [1] College of Electronic and Information Engineering, Hebei University, Baoding Hebei 071002, China; [2] Key Laboratory of Digital Medical Engineering of Hebei Province, Baoding Hebei 071002, China
Source: Machine Tool & Hydraulics, 2021, No. 24, pp. 17-22 (6 pages)
Funding: National Natural Science Foundation of China (61703133); National Key R&D Program of China (2017YFB1401200)
Abstract: During indoor positioning of mobile robots, monocular vision struggles to adapt to illumination changes, and the odometer's cumulative error leads to large positioning errors. To address this, an edge-side multi-sensor fusion positioning method is proposed. The sparse direct method (semi-direct method) serves as the front end of monocular vision: the monocular camera estimates pose in real time, an inertial sensor recovers scale to output position information, and the IMU acceleration and yaw angle together with the odometer's current speed are obtained. An extended Kalman filter fuses the information from the three sensors to achieve more accurate positioning. Sensor readings are processed on the mobile robot side, which reduces the robot's size. Edge-side hybrid multi-sensor information fusion enables the mobile robot to complete autonomous positioning accurately and in real time in a variety of complex environments, even when a single sensor fails and no human intervention is possible.
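The fusion step described in the abstract can be illustrated with a minimal extended Kalman filter sketch. This is not the paper's implementation; it assumes a hypothetical planar state [px, py, yaw], a unicycle motion model driven by the odometer speed and IMU yaw rate, and a position correction from monocular visual odometry, with all noise covariances chosen arbitrarily.

```python
import numpy as np

# Minimal EKF sketch (hypothetical, not the paper's implementation).
# State x = [px, py, yaw]; prediction uses odometer speed v and IMU
# yaw rate w; correction uses a position fix from visual odometry.

def ekf_predict(x, P, v, w, dt, Q):
    px, py, yaw = x
    # Unicycle motion model
    x_pred = np.array([px + v * np.cos(yaw) * dt,
                       py + v * np.sin(yaw) * dt,
                       yaw + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                  [0.0, 1.0,  v * np.cos(yaw) * dt],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    # Measurement: position (px, py) from visual odometry
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

# One predict/update cycle with arbitrary example values
x = np.zeros(3)
P = np.eye(3) * 0.1
Q = np.eye(3) * 1e-3
R = np.eye(2) * 1e-2
x, P = ekf_predict(x, P, v=0.5, w=0.1, dt=0.1, Q=Q)
x, P = ekf_update(x, P, z=np.array([0.05, 0.0]), R=R)
```

In a real system the three sensor streams arrive asynchronously, so prediction would run at the IMU/odometer rate and the visual-odometry correction would be applied whenever a camera pose estimate becomes available.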
Keywords: multi-sensor fusion; extended Kalman filter; monocular vision; mobile robot
Classification: TP242.6 [Automation and Computer Technology — Detection Technology and Automatic Equipment]