A multi-objective reinforcement learning algorithm for deadline constrained scientific workflow scheduling in clouds  


Authors: Yao QIN, Hua WANG, Shanwen YI, Xiaole LI, Linbo ZHAI

Affiliations: [1]School of Computer Science and Technology, Shandong University, Jinan 250101, China [2]Shanghai Police College, Shanghai 200137, China [3]School of Software, Shandong University, Jinan 250101, China [4]School of Information Science and Engineering, Linyi University, Linyi 276005, China [5]School of Information Science and Engineering, Shandong Normal University, Jinan 250014, China

Source: Frontiers of Computer Science, 2021, Issue 5, pp. 25-36 (12 pages)

Funding: the National Natural Science Foundation of China (Grant No. 61672323); the Fundamental Research Funds of Shandong University (2017JC043); the Key Research and Development Program of Shandong Province (2017GGX10122, 2017GGX10142, and 2019JZZY010134); the Natural Science Foundation of Shandong Province (ZR2019MF072).

Abstract: Recently, a growing number of scientific applications have been migrated to the cloud. To cope with the problems this migration brings, more and more researchers consider multiple optimization goals in workflow scheduling. However, previous works overlook details that are challenging but essential. Most existing multi-objective workflow scheduling algorithms ignore weight selection, which may degrade the quality of solutions. Besides, we find that the well-known partial critical path (PCP) strategy, widely used to meet the deadline constraint, cannot accurately reflect the situation at each time step. Workflow scheduling is an NP-hard problem, so self-optimizing algorithms are well suited to it. In this paper, we aim to solve a workflow scheduling problem with a deadline constraint. We design a deadline constrained scientific workflow scheduling algorithm based on multi-objective reinforcement learning (RL), called DCMORL. DCMORL uses the Chebyshev scalarization function to scalarize its Q-values; this method is effective at choosing weights for the objectives. We also propose an improved version of the PCP strategy called MPCP, whose sub-deadlines are updated regularly during the scheduling phase, so that they accurately reflect the situation at each time step. The optimization objectives in this paper are to minimize the execution cost and energy consumption within a given deadline. Finally, we use four scientific workflows to compare DCMORL with several representative scheduling algorithms. The results indicate that DCMORL outperforms them. To the best of our knowledge, this is the first work to apply RL to a deadline constrained workflow scheduling problem.
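The Chebyshev scalarization mentioned in the abstract can be sketched as follows. This is a minimal illustration of the standard technique (scoring each action by the weighted Chebyshev distance of its Q-vector to a utopian reference point and picking the closest), not the paper's implementation; the function names, the toy Q-table, and the cost/energy numbers are hypothetical.

```python
import numpy as np

def chebyshev_scalarize(q_vector, weights, utopian):
    """Weighted Chebyshev distance max_i w_i * |Q_i - z_i| between a
    per-objective Q-vector and a utopian reference point z."""
    return np.max(weights * np.abs(q_vector - utopian))

def greedy_action(Q, state, weights, utopian):
    """Pick the action whose Q-vector is Chebyshev-closest to the
    utopian point (smaller distance = better for minimization)."""
    scores = [chebyshev_scalarize(Q[state, a], weights, utopian)
              for a in range(Q.shape[1])]
    return int(np.argmin(scores))

# Q[s, a] holds one Q-value per objective (here: cost, energy).
Q = np.zeros((4, 3, 2))  # 4 states, 3 actions, 2 objectives
Q[0] = [[5.0, 2.0], [1.0, 1.0], [3.0, 4.0]]
weights = np.array([0.5, 0.5])
utopian = np.array([0.0, 0.0])   # ideal point: zero cost, zero energy

best = greedy_action(Q, 0, weights, utopian)  # action 1: closest to utopian
```

Unlike a linear weighted sum, the Chebyshev metric can reach solutions in non-convex regions of the Pareto front, which is one reason it is favored for weight selection in multi-objective RL.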

Keywords: workflow scheduling; energy saving; multi-objective reinforcement learning; deadline constrained; cloud computing

Classification: TP391 [Automation and Computer Technology - Computer Application Technology]

 
