Authors: SU Di (苏迪); ZHANG Cheng (张成); WANG Ke (王柯); SUN Kai (孙凯) (Key Laboratory of Dynamics and Control of Flight Vehicle, Ministry of Education, School of Aerospace Engineering, Beijing Institute of Technology, Beijing 100081, China)
Affiliation: [1] Key Laboratory of Dynamics and Control of Flight Vehicle, Ministry of Education, School of Aerospace Engineering, Beijing Institute of Technology, Beijing 100081, China
Source: Transactions of Beijing Institute of Technology (《北京理工大学学报》), 2023, No. 7, pp. 734-743 (10 pages)
Abstract: To address the pose estimation problem in on-orbit servicing of space non-cooperative targets, a two-stage relative pose estimation algorithm based on a convolutional neural network is proposed. In the first stage, a detection module is combined with a translation regression module, and the detected image region is fed into the second stage. Separate attitude estimation models are designed for the fly-around and approach phases of the mission: during fly-around, an indirect method that replaces regression with classification is used, while during approach the attitude is estimated by direct regression, thereby achieving pose estimation throughout the on-orbit servicing of the non-cooperative target. A large-scale dataset is introduced that can serve as a benchmark for pose estimation methods. Extensive ablation studies verify the effectiveness of each module. In simulation experiments, the position accuracy reaches 0.1836 m and the attitude accuracy reaches 2.9489°, demonstrating the feasibility of applying a CNN-based monocular vision method to pose estimation of non-cooperative targets in on-orbit servicing.
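The following is a minimal sketch of how such a two-stage pipeline could be organized, assuming a PyTorch implementation with a ResNet-18 backbone, discretized attitude bins for the fly-around branch, and a quaternion output for the approach branch. The abstract does not specify the backbone, output parameterization, or bin count, so the class names, dimensions, and heads below are illustrative assumptions rather than the authors' implementation.

# Hypothetical sketch of the two-stage pose estimation pipeline described in the abstract.
# Backbone, feature sizes, and bin counts are assumptions, not the paper's actual design.
import torch
import torch.nn as nn
import torchvision

class StageOne(nn.Module):
    """Stage 1: detect the target and regress its relative position."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)           # assumed backbone
        self.features = nn.Sequential(*list(backbone.children())[:-1]) # drop the fc layer
        self.bbox_head = nn.Linear(512, 4)    # bounding box (cx, cy, w, h)
        self.trans_head = nn.Linear(512, 3)   # relative translation (x, y, z) in metres

    def forward(self, img):
        f = self.features(img).flatten(1)
        return self.bbox_head(f), self.trans_head(f)

class StageTwo(nn.Module):
    """Stage 2: attitude estimation on the detected image region.
    Fly-around: classification over discretized attitude bins (indirect).
    Approach:   direct regression of a unit quaternion."""
    def __init__(self, n_bins=36):
        super().__init__()
        self.n_bins = n_bins
        backbone = torchvision.models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.cls_head = nn.Linear(512, 3 * n_bins)  # yaw/pitch/roll bin logits
        self.reg_head = nn.Linear(512, 4)           # quaternion (w, x, y, z)

    def forward(self, crop, mode="flyaround"):
        f = self.features(crop).flatten(1)
        if mode == "flyaround":
            return self.cls_head(f).view(-1, 3, self.n_bins)
        q = self.reg_head(f)
        return q / q.norm(dim=1, keepdim=True)      # normalize to a unit quaternion

# Usage: stage 1 gives the box and translation; the crop around the box feeds stage 2.
img = torch.randn(1, 3, 224, 224)
stage1, stage2 = StageOne(), StageTwo()
bbox, translation = stage1(img)
attitude_logits = stage2(img, mode="flyaround")     # cropping step omitted for brevity

Treating fly-around attitude as classification over bins avoids the wrap-around discontinuity of regressing angles directly, which is consistent with the abstract's description of "classification instead of regression" for that phase.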