Gradient-based algorithms for multi-objective bi-level optimization (Cited by: 1)

Authors: Xinmin Yang, Wei Yao, Haian Yin, Shangzhi Zeng, Jin Zhang

Affiliations: [1] National Center for Applied Mathematics in Chongqing, Chongqing 401331, China; [2] School of Mathematical Sciences, Chongqing Normal University, Chongqing 401331, China; [3] Department of Mathematics, Southern University of Science and Technology, Shenzhen 518055, China; [4] National Center for Applied Mathematics Shenzhen, Shenzhen 518000, China; [5] Department of Mathematics and Statistics, University of Victoria, Victoria, BC V8W 2Y2, Canada

Source: Science China Mathematics, 2024, Issue 6, pp. 1419-1438 (20 pages)

Funding: Supported by the Major Program of the National Natural Science Foundation of China (Grant Nos. 11991020 and 11991024); the National Natural Science Foundation of China (Grant Nos. 12371305 and 12222106); the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2022B1515020082); and the Shenzhen Science and Technology Program (Grant No. RCYX20200714114700072).

Abstract: Multi-objective bi-level optimization (MOBLO) addresses nested multi-objective optimization problems common in a range of applications. However, its multi-objective and hierarchical bi-level nature makes it notably complex. Gradient-based MOBLO algorithms have recently grown in popularity, as they effectively solve crucial machine learning problems such as meta-learning, neural architecture search, and reinforcement learning. Unfortunately, these algorithms depend on solving a sequence of approximation subproblems with high accuracy, resulting in adverse time and memory complexity that lowers their numerical efficiency. To address this issue, we propose a gradient-based algorithm for MOBLO, called gMOBA, which has fewer hyperparameters to tune, making it both simple and efficient. Additionally, we establish its theoretical validity by proving convergence to the desirable Pareto stationarity. Numerical experiments confirm the practical efficiency of the proposed method and verify the theoretical results. To accelerate the convergence of gMOBA, we introduce a beneficial L2O (learning to optimize) neural network, called L2O-gMOBA, implemented as the initialization phase of the gMOBA algorithm. Comparative numerical results are presented to illustrate the performance of L2O-gMOBA.
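To make the setting concrete, the following is a minimal illustrative sketch of a generic gradient-based scheme for a toy multi-objective bi-level problem, not the paper's gMOBA algorithm: the lower-level problem is solved inexactly by a few gradient steps, and the two upper-level gradients are combined via the classical min-norm (MGDA-style) subproblem, whose solution admits a closed form for two objectives. All objectives, step sizes, and iteration counts here are invented for illustration.

```python
import numpy as np

# Toy bi-level problem (illustrative only, NOT the paper's gMOBA):
#   lower level:  y*(x) = argmin_y g(x, y),  g(x, y) = 0.5 * ||y - x||^2
#   upper level:  minimize F1, F2 jointly in the Pareto sense, with
#     F1(x, y) = 0.5*||x - 1||^2 + 0.5*||y||^2
#     F2(x, y) = 0.5*||x + 1||^2 + 0.5*||y||^2
# For this lower-level problem, y*(x) = x and dy*/dx = I, so the total
# upper-level gradients in x are simple to write down.

def lower_grad(x, y):
    return y - x                      # gradient of g in y

def upper_grads(x, y):
    g1 = (x - 1.0) + y                # dF1/dx + (dy*/dx)^T dF1/dy
    g2 = (x + 1.0) + y
    return g1, g2

def min_norm_combination(g1, g2):
    # Min-norm convex combination lam*g1 + (1-lam)*g2, lam in [0, 1]
    # (the MGDA subproblem for two objectives, in closed form).
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        lam = 0.5
    else:
        lam = float(np.clip((g2 @ (g2 - g1)) / denom, 0.0, 1.0))
    return lam * g1 + (1.0 - lam) * g2

x = np.array([3.0])
y = np.zeros(1)
for _ in range(200):
    for _ in range(5):                # inexact lower-level solve
        y = y - 0.5 * lower_grad(x, y)
    d = min_norm_combination(*upper_grads(x, y))
    x = x - 0.1 * d                   # step along the common descent direction

# At a Pareto-stationary point, the min-norm combined gradient vanishes.
print(float(np.linalg.norm(min_norm_combination(*upper_grads(x, y)))))
```

For this toy problem the Pareto-stationary set of the reduced upper-level problem is the interval [-0.5, 0.5], and the iteration above drives the combined gradient norm to zero at its boundary point x = 0.5. The paper's contribution, by contrast, is precisely to avoid solving such inner approximation subproblems to high accuracy while still guaranteeing Pareto stationarity.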

Keywords: multi-objective; bi-level optimization; convergence analysis; Pareto stationary; learning to optimize

Classification: O224 (Science: Operations Research and Cybernetics); TP18 (Science: Mathematics)

 
