Day-off scheduling approach based on reinforcement learning


Authors: LI Tiantian (李甜甜); CHEN Desheng (陈德胜); CAO Bin (曹斌) (College of Computer Science and Technology / College of Software, Zhejiang University of Technology, Hangzhou 310023, China)

Affiliation: [1] College of Computer Science and Technology (College of Software), Zhejiang University of Technology, Hangzhou 310023, Zhejiang, China

Source: Computer Integrated Manufacturing Systems (《计算机集成制造系统》), 2024, No. 10, pp. 3566-3577 (12 pages)

Funding: Zhejiang Provincial Natural Science Foundation (LQ21F020019); Zhejiang Provincial Key Research and Development Program (2022C01145).

Abstract: Aiming at the problems of poor solution quality, low efficiency, and inaccurate expression of day-off constraints in traditional scheduling approaches, a day-off scheduling approach based on reinforcement learning is proposed. The day-off scheduling process is modeled as a Markov Decision Process (MDP), an action-mask method is used to enforce the day-off constraints, and a Deep Q-Network (DQN) is trained to learn a scheduling policy. The learned policy is then used to generate staff assignments quickly. Experiments show that, while satisfying the day-off constraints, the approach rapidly produces assignments that match daily workforce demand. Compared with a traditional Genetic Algorithm (GA) based method, it achieves smaller deviation in fitting workforce demand and higher solution efficiency.
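The paper's own implementation is not reproduced in this record; as a minimal sketch of the action-mask idea described in the abstract (constraint-violating assignments are excluded before the greedy step of a DQN policy), assuming one Q-value per candidate assignment and a boolean feasibility mask derived from the day-off constraints:

```python
import numpy as np

def masked_greedy_action(q_values, action_mask):
    """Pick the highest-Q action among those the mask allows.

    q_values:    1-D array of Q-values, one per candidate action (e.g. one per
                 employee who could take the day off being scheduled).
    action_mask: boolean array, True where the action satisfies the day-off
                 constraints (minimum rest days, staffing limits, etc.).
    """
    # Setting invalid entries to -inf guarantees argmax never selects them,
    # which is how an action mask enforces hard constraints during both
    # training and deployment of a DQN policy.
    masked_q = np.where(action_mask, q_values, -np.inf)
    return int(np.argmax(masked_q))

# Toy example: 4 candidate actions; actions 1 and 3 violate a constraint,
# so the best *valid* action (index 2) is chosen despite lower raw Q-values.
q = np.array([0.2, 0.9, 0.5, 1.4])
mask = np.array([True, False, True, False])
print(masked_greedy_action(q, mask))  # prints 2
```

The same masking step would sit inside an epsilon-greedy loop during training, with exploration also restricted to the valid actions; the function names and array shapes here are illustrative assumptions, not the authors' API.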

Keywords: day-off scheduling; reinforcement learning; Markov decision process; deep Q-network; action mask

Classification code: TP18 [Automation and Computer Technology: Control Theory and Control Engineering]

 
