Authors: Shen Jiankai; Dong Tiangang[1] (College of Computer, Sichuan University, Chengdu 610064, Sichuan, China)
Source: Journal of Civil Aviation Flight University of China, 2018, No. 3, pp. 5-9 (5 pages)
Funding: National Air Traffic Control Committee Office project (GKG201410003)
Abstract: Tower controllers currently assign surface taxi routes to aircraft mainly by reference to the flight sequence, the arrival/departure type, and the target endpoint. This paper models controller behavior with Q-learning and adds the prior-probability taxi routes of future flights to the set of conflicts to be detected, so that the controller agent can anticipate conflicts at future times and obtain optimal taxi paths for common taxi conflicts while respecting aircraft separation and airport capacity. We first discuss the finiteness of the discrete state space for ground conflicts under Q-learning, and then design the action sequences and the reward function over those states. In the experimental environment, ground taxiway priorities were annotated manually, and random flight-schedule sample sets were generated for training. In the simulation results the controller agent resolves conflicts effectively, demonstrating the feasibility and advantages of this approach.
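The abstract describes tabular Q-learning over discrete ground states, with actions and a reward function designed to penalize taxi conflicts. The paper's state encoding, taxiway graph, and reward values are not given here, so the following is only a minimal illustrative sketch: a toy taxiway graph where node 0 is the gate, node 3 the runway entry, and node 1 a conflict hotspot (e.g. on another flight's predicted route); all names and numbers are assumptions, not the authors' model.

```python
import random

# Toy taxiway graph: each node lists the nodes reachable from it.
# Node 1 is a hypothetical conflict hotspot; node 2 is the conflict-free detour.
GRAPH = {0: [1, 2], 1: [3], 2: [3], 3: []}
CONFLICT, GOAL = 1, 3

def reward(nxt):
    """Illustrative reward: reach the runway entry, avoid the conflict node,
    and pay a small cost per taxi segment."""
    if nxt == GOAL:
        return 10.0
    if nxt == CONFLICT:
        return -10.0
    return -1.0

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    # Q-table over (state, action) pairs; an action is the next node to taxi to.
    q = {(s, a): 0.0 for s in GRAPH for a in GRAPH[s]}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            acts = GRAPH[s]
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda x: q[(s, x)])
            nxt = a  # deterministic transition in this toy model
            future = max((q[(nxt, b)] for b in GRAPH[nxt]), default=0.0)
            # standard Q-learning update
            q[(s, a)] += alpha * (reward(nxt) + gamma * future - q[(s, a)])
            s = nxt
    return q

q = train()
# The greedy policy at the gate should prefer the conflict-free branch (node 2).
print(max(GRAPH[0], key=lambda a: q[(0, a)]))
```

In the paper this idea is extended so that the conflict set also contains routes weighted by the prior probability of future flights, letting the agent trade off against conflicts that have not yet materialized.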
Classification: V355 [Aerospace Science and Technology: Human-Machine and Environment Engineering]