Gradient descent learning based on a reinforcement learning strategy for solving GCP


Authors: Song Jiahuan; Wang Xiaofeng [1,2]; Hu Simin; Yao Jiaxing; Suo Xiaona (School of Computer Science & Engineering, North Minzu University, Yinchuan 750021, China; Laboratory of Image & Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China)

Affiliations: [1] School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China; [2] Key Laboratory of Image and Graphics Intelligent Processing of the State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China

Source: Application Research of Computers (《计算机应用研究》), 2025, No. 4, pp. 1011-1017 (7 pages)

Funding: National Natural Science Foundation of China (62062001); Ningxia Young Top Talent Program (2021); Ningxia Natural Science Foundation (2024AAC03165).

Abstract: The graph coloring problem (GCP) is a classical combinatorial optimization problem that aims to assign a color to each vertex of a graph so that adjacent vertices receive different colors, while minimizing the total number of colors used. As an NP-hard problem, GCP poses challenges for traditional solution methods such as greedy algorithms, heuristic search, and evolutionary algorithms, which are often limited by high computational complexity and a tendency to become trapped in local optima. To address these issues, this paper proposes a gradient descent learning method based on a reinforcement learning strategy (RLS) for solving GCP. Specifically, it reformulates GCP as a policy optimization problem within the reinforcement learning framework, designing a policy gradient algorithm that maps graph coloring states to reinforcement learning states, treats color assignments as actions, and uses the negative value of the objective function as the reward signal to iteratively optimize the coloring policy. Experimental results demonstrate that the proposed method outperforms conventional heuristic algorithms on graph instances of various types and scales, showing strong global exploration capability and convergence, especially under high-dimensional and complex constraint conditions. The study indicates that reinforcement learning-based graph coloring holds broad potential for complex combinatorial optimization problems, offering an effective new solution path for GCP and its derived problems.
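The abstract's formulation (colorings as states, per-vertex color assignments as actions, the negative objective as the reward) can be sketched with a minimal REINFORCE-style policy gradient. This is an illustrative sketch only, not the paper's implementation: the toy graph, softmax-over-logits policy, learning rate, and running-mean baseline below are all assumptions for demonstration.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a per-vertex softmax policy over
# colors, a sampled full coloring as the action, reward = -(number of
# conflicting edges), and a REINFORCE-style policy gradient update.

rng = np.random.default_rng(0)

# Hypothetical toy graph: a 5-cycle with one chord (3-colorable).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
n_vertices, n_colors = 5, 3

logits = np.zeros((n_vertices, n_colors))  # policy parameters

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def conflicts(coloring):
    """Objective: number of edges whose endpoints share a color."""
    return sum(int(coloring[u] == coloring[v]) for u, v in edges)

lr, baseline = 0.5, 0.0
best_coloring, best_reward = None, -len(edges) - 1
for step in range(2000):
    probs = softmax(logits)
    # Action: sample one color per vertex from the current policy.
    coloring = np.array([rng.choice(n_colors, p=probs[v])
                         for v in range(n_vertices)])
    reward = -conflicts(coloring)  # negative objective as the reward signal
    if reward > best_reward:
        best_reward, best_coloring = reward, coloring.copy()
    if reward == 0:                # proper coloring found
        break
    baseline = 0.9 * baseline + 0.1 * reward  # running mean, reduces variance
    # Gradient of log pi(coloring) w.r.t. logits: one-hot(actions) - probs.
    grad = -probs
    grad[np.arange(n_vertices), coloring] += 1.0
    logits += lr * (reward - baseline) * grad

print("best reward:", best_reward, "coloring:", best_coloring)
```

Treating the whole coloring as one action keeps the sketch short; the paper's state/action decomposition and network parameterization may differ.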

Keywords: graph coloring problem; reinforcement learning strategy; gradient descent; combinatorial optimization problem

Classification: TP301.6 (Automation and Computer Technology — Computer System Architecture)

 
