Authors: LIU Jiaming (刘佳铭); DUAN Jingxuan (段静玄) [2]; ZHANG Xueliang (张学良); LIN Jing (林静)
Affiliations: [1] The Fifth Naval Military Representative Office of Shanghai Bureau of Naval Equipment Department, Shanghai 200135; [2] China Ship Development and Design Center, Wuhan 430064
Source: Ship Electronic Engineering (《舰船电子工程》), 2022, No. 11, pp. 60-64 (5 pages)
Abstract: GNSS measurement accuracy is the basis of accurate radar calibration. To improve GNSS positioning accuracy, a reinforcement learning framework is constructed that requires no strict assumptions about the hardware parameters or motion model of the GNSS device and automatically searches for an optimal policy to "correct" the raw GNSS observations. The model uses an effective confidence-based reward mechanism that is independent of geographic location, which makes it generalizable. Its performance is evaluated by comparison with an extended Kalman filter (EKF) algorithm. Experiments show that, compared with the baseline EKF model, the proposed reinforcement learning model converges faster, has smaller prediction variance, and reduces the direction-finding positioning error by 50%.
Classification code: TN958 [Electronics and Telecommunications - Signal and Information Processing]
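The abstract above describes an agent that learns a policy for correcting raw GNSS observations, driven by a confidence-based reward that does not depend on geographic location. The paper's actual model, state representation, and reward are not reproduced in this record, so the following is only a minimal, hypothetical Python sketch of that idea: a tabular Q-learning agent chooses a discrete correction for a biased, noisy toy observation and is rewarded by how well the corrected fix agrees with a short-term prediction. The noise model, discretization, drift assumption, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a toy confidence-rewarded correction loop,
# NOT the paper's model. All parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

CORRECTIONS = np.linspace(-2.0, 2.0, 9)   # candidate correction offsets (m)
N_STATES = 21                              # discretized innovation bins
q_table = np.zeros((N_STATES, len(CORRECTIONS)))

def discretize(innovation, lo=-5.0, hi=5.0, n=N_STATES):
    """Map the predicted-vs-observed innovation to a state index."""
    return int(np.clip((innovation - lo) / (hi - lo) * (n - 1), 0, n - 1))

def confidence_reward(residual, sigma=1.0):
    """Confidence-based reward: high when the corrected observation agrees
    with the short-term prediction, independent of absolute location."""
    return float(np.exp(-0.5 * (residual / sigma) ** 2))

alpha, gamma, eps = 0.1, 0.9, 0.1
true_pos, pred_pos = 0.0, 0.0

for step in range(20000):
    true_pos += 0.05                              # slow drift (assumed known here)
    raw_obs = true_pos + rng.normal(1.0, 0.5)     # biased, noisy toy "GNSS" fix
    innovation = raw_obs - pred_pos
    s = discretize(innovation)

    # epsilon-greedy choice among candidate corrections
    a = rng.integers(len(CORRECTIONS)) if rng.random() < eps else int(q_table[s].argmax())
    corrected = raw_obs + CORRECTIONS[a]

    # reward measures agreement between the corrected fix and the predicted position
    r = confidence_reward(corrected - (pred_pos + 0.05))
    s_next = discretize(corrected - pred_pos)
    q_table[s, a] += alpha * (r + gamma * q_table[s_next].max() - q_table[s, a])

    pred_pos = corrected                          # roll the prediction forward

# Greedy correction learned for a typical +1 m innovation; with the toy
# bias of +1 m it should settle near -1 m.
print("learned correction for +1 m innovation:",
      CORRECTIONS[int(q_table[discretize(1.0)].argmax())])
```

The abstract states only that the reward is confidence-based and location-independent; the Gaussian agreement term above is one simple way to realize that property, since it depends on the residual rather than on any absolute coordinate.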