Learning to sample initial solution for solving 0-1 discrete optimization problem by local search  (Cited by: 1)


Authors: Xin Liu, Jianyong Sun, Zongben Xu

Affiliation: [1] School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China

Source: Science China Mathematics, 2024, Issue 6, pp. 1317-1340 (24 pages)

Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11991023 and 62076197) and the Key Research and Development Project of Shaanxi Province (Grant No. 2022GXLH01-15).

Abstract: Local search methods are convenient alternatives for solving discrete optimization problems (DOPs). These easy-to-implement methods are able to find approximate optimal solutions within a tolerable time limit. It is known that the quality of the initial solution greatly affects the quality of the approximate solution found by a local search method. In this paper, we propose to take the initial solution as a random variable and learn its preferable probability distribution. The aim is to sample a good initial solution from the learned distribution so that the local search can find a high-quality solution. We develop two different deep network models to deal with DOPs defined on sets (the knapsack problem) and on graphs (the maximum clique problem), respectively. The deep neural network learns the representation of an optimization problem instance and transforms the representation into its probability vector. Experimental results show that, given an initial solution sampled from the learned probability distribution, a local search method can acquire much better approximate solutions than with randomly sampled initial solutions on the synthesized knapsack instances and the Erdős-Rényi random graph instances. Furthermore, with sampled initial solutions, a classical genetic algorithm can achieve better solutions than a randomly initialized population in solving the maximum clique problem on DIMACS instances. In particular, we emphasize that the developed models can generalize across dimensions and across graphs of various densities, which is an important advantage for generalizing deep-learning-based optimization algorithms.
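The pipeline described in the abstract (sample a 0-1 initial solution from a learned per-item probability vector, then refine it by local search) can be illustrated with a minimal sketch on a toy knapsack instance. The probability vector `probs` below stands in for the output of the paper's deep network and is a hypothetical hand-picked value, not the authors' model; the local search is a simple greedy 1-flip improvement loop, not the specific method used in the paper.

```python
import random

def sample_initial_solution(probs, rng):
    """Sample a 0-1 vector: include item i with probability probs[i]."""
    return [1 if rng.random() < p else 0 for p in probs]

def knapsack_value(x, values, weights, capacity):
    """Total value of the selection, or -inf if it exceeds capacity."""
    if sum(w for xi, w in zip(x, weights) if xi) > capacity:
        return float("-inf")
    return sum(v for xi, v in zip(x, values) if xi)

def local_search(x, values, weights, capacity, max_sweeps=1000):
    """Greedy 1-flip local search: keep any single-bit flip that improves value."""
    best = knapsack_value(x, values, weights, capacity)
    for _ in range(max_sweeps):
        improved = False
        for i in range(len(x)):
            x[i] ^= 1  # tentatively flip item i
            v = knapsack_value(x, values, weights, capacity)
            if v > best:
                best, improved = v, True
            else:
                x[i] ^= 1  # revert the flip
        if not improved:
            break  # local optimum reached
    return x, best

# Toy instance (hypothetical numbers for illustration).
values = [10, 7, 4, 9, 3]
weights = [4, 3, 2, 5, 1]
capacity = 8
probs = [0.9, 0.6, 0.3, 0.2, 0.5]  # stand-in for the network's probability vector

rng = random.Random(0)
x0 = sample_initial_solution(probs, rng)
x, best = local_search(x0, values, weights, capacity)
print(x, best)
```

The point of the paper is precisely that a well-learned `probs` biases `x0` toward good regions, so the local search starts closer to a high-quality local optimum than a uniform random start would.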

Keywords: discrete optimization, deep learning, graph convolutional network, local search

Classification: O224 [Science — Operations Research and Cybernetics]; TP18 [Science — Mathematics]

 
