Incremental reward based on upper bound for solving the maximum k-plex problem


Authors: LIU Yanli [1]; CHI Siyi; LIU Lang; HE Kun [2]

Affiliations: [1] School of Science, Wuhan University of Science and Technology, Wuhan 430078, Hubei, China; [2] School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China

Source: Journal of Huazhong University of Science and Technology (Natural Science Edition), 2024, No. 11, pp. 43-49 (7 pages)

Funding: National Natural Science Foundation of China (U22B2017); Hubei Provincial Department of Education project (Q20211111).

Abstract: To address the problem that the branching strategy of exact maximum k-plex algorithms affects the pruning rate, a vertex-selection strategy based on reinforcement learning and an upper-bound reward is proposed. The strategy obtains an upper bound from a partition-based bounding function and rewards the branching vertex with the change in the upper bound, so as to evaluate the impact of each branching action on the subproblem. An incremental reward computation is implemented to reduce the learning cost. Combined with the method of dividing the candidate set into a branch set and a non-branch set, the strategy preferentially selects the vertex with the largest accumulated reward in the branch set. To evaluate the efficiency of the proposed vertex strategy, 221 graphs from the 10th DIMACS and real-world large sparse graph benchmarks were tested. Experimental results show that, compared with the state-of-the-art KpLeX and Maplex algorithms, the proposed IRkplex algorithm reduces the average solving time by 2.00% to 23.71%, indicating that the vertex strategy effectively improves the pruning rate.
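For reference, the structure the algorithm searches for is defined by the standard k-plex condition: every vertex in a set S may be non-adjacent to at most k − 1 other vertices of S, i.e. each vertex has at least |S| − k neighbours inside S. A minimal sketch of this check (illustrative names only, not the authors' code):

```python
def is_k_plex(adj, S, k):
    """Check the k-plex condition: every vertex in S has
    at least |S| - k neighbours inside S."""
    S = set(S)
    return all(len(adj[v] & S) >= len(S) - k for v in S)

# Small example graph as an adjacency-set dictionary.
adj = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1, 3},
    3: {0, 2},
}

# {0, 1, 2, 3} is a 2-plex: vertices 1 and 3 each miss one neighbour.
print(is_k_plex(adj, {0, 1, 2, 3}, 2))  # True
# It is not a 1-plex (clique), since 1 and 3 are non-adjacent.
print(is_k_plex(adj, {0, 1, 2, 3}, 1))  # False
```

Note that a 1-plex is exactly a clique, so the maximum k-plex problem generalizes the maximum clique problem and is likewise NP-hard.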

Keywords: maximum k-plex problem; NP-hard problem; reinforcement learning; reward function; branching strategy

CLC number: TP302 (Automation and Computer Technology — Computer System Architecture)
