A Distributed Generalization and Acceleration Strategy for Convex Optimization Problem
Abstract
AU2020100078A4 (2020-02-13)

In this patent, an accelerated distributed strategy based on Nesterov's gradient method is proposed for solving convex optimization problems defined over a random undirected networked multi-agent system. The algorithm comprises five parts: determining parameters, initializing variables, exchanging information, computing gradients, and updating variables. By implementing a single doubly-stochastic weight matrix, the algorithm set forth in the present invention unifies, generalizes, and improves the convergence rate of several typical exact distributed first-order algorithms. Under the conditions that the global objective function is strongly convex and each local objective function has a Lipschitz-continuous gradient, the proposed algorithm converges linearly to the global optimal solution when the largest step-size is positive and less than an explicitly estimated upper bound, and the largest momentum parameter is nonnegative and less than an upper bound determined by the largest step-size. The present invention has broad applications in large-scale machine learning.

Fig. 1 (flowchart, sheet 1/4): Start → each agent sets k = 0 and a maximum number of iterations k_max → each agent initializes its local variables → compute system parameters → select a step-size and momentum parameters according to the computed parameters → each agent receives variables from its in-neighbors and sends variables to its out-neighbors → each agent updates its variables and computes the gradient → each agent sets k = k + 1 → if k > k_max, end; otherwise repeat the exchange/update steps.
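The update loop described above (consensus via a doubly-stochastic weight matrix, a Nesterov-style momentum step, and gradient computation) can be sketched as follows. This is a minimal illustration of a momentum-augmented gradient-tracking iteration of the kind the abstract describes, not the exact update rule claimed in the patent; the ring topology, quadratic local objectives, and the step-size/momentum values `alpha` and `beta` are all assumptions chosen for the demo.

```python
import numpy as np

# Hedged sketch: distributed Nesterov-type method with gradient tracking
# over a doubly-stochastic weight matrix W. Illustrative only; the exact
# patented update rule may differ.
rng = np.random.default_rng(0)
n, d = 5, 3  # number of agents, dimension of the decision variable

# Local quadratics f_i(x) = 0.5 x^T A_i x - b_i^T x; their sum is
# strongly convex and each gradient is Lipschitz continuous.
A = [np.diag(rng.uniform(1.0, 2.0, d)) for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]
grad = lambda i, x: A[i] @ x - b[i]

# Global minimizer of sum_i f_i, for checking convergence.
x_star = np.linalg.solve(sum(A), sum(b))

# Doubly-stochastic weights for a ring graph (each agent averages
# itself and its two neighbors).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = W[i, i] = 1.0 / 3.0

alpha, beta = 0.05, 0.3  # step-size and momentum parameter (assumed values)
x = np.zeros((n, d))
x_prev = x.copy()
s = np.array([grad(i, x[i]) for i in range(n)])  # gradient trackers

for _ in range(500):
    # Momentum-extrapolated consensus step (Nesterov-style acceleration),
    # then a step along the tracked gradient direction.
    v = x + beta * (x - x_prev)
    x_new = W @ v - alpha * s
    # Gradient-tracking update: each s_i remains an estimate of the
    # network-average gradient at the current iterates.
    s = W @ s + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x_prev, x = x, x_new

print(np.max(np.abs(x - x_star)))  # every agent's iterate nears x_star
```

Because W is doubly stochastic, the average of the trackers `s` always equals the average of the local gradients, so a fixed point forces consensus at a stationary point of the global objective, which matches the linear-convergence claim for sufficiently small step-size and momentum.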