American Control Conference

Nonasymptotic convergence rates for cooperative learning over time-varying directed graphs


Abstract

We study the problem of cooperative learning with a network of agents where some agents repeatedly access information about a random variable with unknown distribution. The group objective is to globally agree on a joint hypothesis (distribution) that best describes the observed data at all nodes. The agents interact with their neighbors in an unknown sequence of time-varying directed graphs. Following the pioneering work of Jadbabaie, Molavi, Sandroni, and Tahbaz-Salehi and others, we propose local learning dynamics which combine Bayesian updates at each node with a local aggregation rule of private agent signals. We show that these learning dynamics drive all agents to the set of hypotheses which best explain the data collected at all nodes as long as the sequence of interconnection graphs is uniformly strongly connected. Our main result establishes a non-asymptotic, explicit, geometric convergence rate for the learning dynamic.
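The abstract does not spell out the update rule. The Python sketch below illustrates one standard form of such cooperative learning dynamics, in which each agent geometrically averages its neighbors' beliefs and then performs a Bayesian update on its private signal. The fixed weight matrix A, the two-hypothesis Bernoulli signal model, and the function learning_step are illustrative assumptions, not the construction analyzed in the paper (which treats an unknown sequence of time-varying directed graphs).

import numpy as np

# Minimal sketch of cooperative (non-Bayesian) learning dynamics.
# Assumptions not taken from the paper: a fixed row-stochastic weight matrix A,
# a two-hypothesis Bernoulli signal model, and log-linear (geometric) averaging
# of neighbor beliefs before the local Bayesian update.

def learning_step(beliefs, weights, likelihoods):
    """One round: mix neighbors' log-beliefs, then apply a Bayesian update.

    beliefs     : (n_agents, n_hypotheses) current beliefs, rows sum to 1
    weights     : (n_agents, n_agents) row-stochastic mixing matrix for this step
    likelihoods : (n_agents, n_hypotheses) likelihood of each agent's new signal
                  under each hypothesis
    """
    mixed = np.exp(weights @ np.log(beliefs))      # weighted geometric average of neighbor beliefs
    unnormalized = mixed * likelihoods             # local Bayesian update on the private signal
    return unnormalized / unnormalized.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n_agents, n_hyp = 3, 2
beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)  # uniform priors
A = np.array([[0.5, 0.5, 0.0],                     # fixed gossip weights (illustrative only)
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

for _ in range(50):
    signals = (rng.random(n_agents) < 0.7).astype(int)        # true P(signal = 1) = 0.7
    # Hypothesis 0 says P(signal = 1) = 0.3, hypothesis 1 says P(signal = 1) = 0.7.
    lik = np.where(signals[:, None] == 1, [0.3, 0.7], [0.7, 0.3])
    beliefs = learning_step(beliefs, A, lik)

print(beliefs)   # every agent's belief concentrates on hypothesis 1

In this toy run the (here fixed) communication graph is strongly connected, and every agent's belief concentrates geometrically on the hypothesis that best explains the observed signals, mirroring the kind of convergence behavior the abstract describes.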
