Authors: ZHANG Yang (张洋); ZHAO Erxun (赵尔迅); ZHANG Kebei (张科备); GAO Jingmin (高晶敏)
Affiliations: [1] Beijing Information Science and Technology University, Beijing 100192, China; [2] Beijing Institute of Control Engineering, Beijing 100094, China
Source: Aerospace Control and Application (《空间控制技术与应用》), 2023, No. 6, pp. 113-122 (10 pages)
Funding: National Natural Science Foundation of China (62001035).
Abstract: To ensure that a probe lands safely and accurately on the surface of an extraterrestrial body, its descent velocity must be measured in real time, providing an important reference for the guidance, navigation and control system to complete the soft-landing task. A real-time visual velocity measurement method based only on an optical camera is proposed. For video images of the celestial surface, the recurrent all-pairs field transforms (RAFT) optical flow algorithm is used to extract the optical flow field between adjacent frames, and the convolution and pooling layers of a deep neural network then extract the feature vector corresponding to the flow field. To reduce the influence of imaging perspective effects on measurement accuracy during descent, a long short-term memory (LSTM) network suited to consecutive video frames is constructed to fit the feature vectors to velocities, achieving real-time estimation of the probe's landing speed. Simulation results show that, compared with a velocity measurement algorithm based on a forward-propagation network, the proposed method reduces the mean absolute percentage error by 11.98% and achieves higher measurement accuracy.
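The abstract describes a pipeline of dense optical flow between adjacent frames, a convolution/pooling feature extractor, and an LSTM regressor over the resulting feature sequence. The sketch below illustrates that architecture only; it is not the authors' implementation, and the framework (PyTorch), layer sizes, and names such as FlowVelocityLSTM and hidden_size are assumptions made for illustration.

```python
# A minimal sketch (not the paper's code) of the described pipeline:
# optical flow between adjacent frames -> convolution/pooling feature
# extractor -> LSTM over the feature sequence -> per-frame descent speed.
import torch
import torch.nn as nn

class FlowVelocityLSTM(nn.Module):
    def __init__(self, hidden_size=128):
        super().__init__()
        # Convolution + pooling layers turn a 2-channel optical-flow field
        # (u, v components) into a compact feature vector per frame pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> (B*T, 64, 1, 1)
        )
        # The LSTM fits the temporal relation between flow features and
        # velocity across consecutive frames.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # scalar speed per frame

    def forward(self, flow_seq):
        # flow_seq: (B, T, 2, H, W) optical-flow fields for T adjacent-frame
        # pairs, e.g. produced by a RAFT-style optical flow method.
        b, t, c, h, w = flow_seq.shape
        feats = self.encoder(flow_seq.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out).squeeze(-1)  # (B, T) estimated speeds

# Quick shape check with random tensors standing in for real flow fields.
model = FlowVelocityLSTM()
dummy = torch.randn(2, 8, 2, 64, 64)
print(model(dummy).shape)  # torch.Size([2, 8])
```

Feeding the per-frame flow features through a recurrent layer, rather than regressing each frame independently, is what allows temporal context to compensate for the growing perspective distortion as the probe descends, which is the comparison the abstract draws against a forward-propagation baseline.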