
A Distributed Generalization and Acceleration Strategy for Convex Optimization Problem


Abstract

In this patent, an accelerated distributed strategy based on the Nesterov gradient method is proposed for solving convex optimization problems defined over a random undirected networked multi-agent system. The algorithm comprises five parts: determining parameters, initializing variables, exchanging information, computing gradients, and updating variables. By employing a single doubly-stochastic weight matrix, the algorithm set forth in the present invention unifies, generalizes, and improves the convergence rate of several typical exact distributed first-order algorithms. Under the conditions that the global objective function is strongly convex and each local objective function has a Lipschitz continuous gradient, the proposed algorithm converges linearly to the global optimal solution when the largest step-size is positive and less than an explicitly estimated upper bound, and the largest momentum parameter is nonnegative and less than an upper bound determined by the largest step-size. The present invention has broad application in large-scale machine learning.

Fig. 1 (flowchart of the algorithm): each agent sets k = 0 and a maximum number of iterations kmax; initializes its local variables; computes system parameters; selects a step-size and momentum parameters according to those parameters; receives variables from its in-neighbors and sends variables to its out-neighbors; updates its variables and computes the gradient; then sets k = k + 1 and repeats until k > kmax.
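The five-part loop described in the abstract can be sketched as an accelerated gradient-tracking iteration driven by a single doubly-stochastic weight matrix W. The following is an illustrative reconstruction under stated assumptions, not the patented algorithm itself: the local quadratic objectives, the ring topology, the Metropolis weights, and the parameter values (alpha = 0.1, beta = 0.05) are all choices made for the demo rather than taken from the patent.

```python
import numpy as np

# Illustrative sketch of an accelerated distributed gradient-tracking
# method with heavy-ball-style momentum (an assumption; the patent's
# exact update rule is not reproduced here).
# Each agent i holds a local objective f_i(x) = 0.5 * (x - b_i)^2,
# so the global minimizer of sum_i f_i is mean(b).

n = 5
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # local data per agent

def grad(x):
    # Stacked local gradients: agent i only evaluates x_i - b_i.
    return x - b

# Metropolis weights on a ring graph: doubly stochastic by construction
# (each node has degree 2, so every nonzero entry is 1/3).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1.0 / 3.0
    W[i, (i - 1) % n] = 1.0 / 3.0
    W[i, (i + 1) % n] = 1.0 / 3.0

alpha, beta = 0.1, 0.05   # step-size and momentum, chosen inside a stable range
x = np.zeros(n)           # variable initialization
x_prev = x.copy()
y = grad(x)               # gradient tracker, initialized to the local gradients

for _ in range(3000):
    # Information exchange (W @ ...), variable update with momentum,
    # and gradient-tracking update, all using the same matrix W.
    x_new = W @ x - alpha * y + beta * (x - x_prev)
    y = W @ y + grad(x_new) - grad(x)
    x_prev, x = x, x_new

print(x)  # every agent approaches the global minimizer mean(b) = 3.0
```

With strongly convex local quadratics and this small step-size, all agents reach consensus on the global minimizer at a linear rate, which matches the convergence behavior claimed in the abstract.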

Bibliographic Data

  • Publication number: AU2020100078A4

  • Patent type

  • Publication date: 2020-02-13

  • Original document format: PDF

  • Applicant/Patentee: SOUTHWEST UNIVERSITY

  • Application number: AU20200100078

  • Filing date: 2020-01-16

  • Classification: G06F17/11; G06F9/46

  • Country: AU

  • Database entry date: 2022-08-21 11:11:37
