IEEE Transactions on Information Forensics and Security

Dynamic Differential Privacy for ADMM-Based Distributed Classification Learning


Abstract

Privacy-preserving distributed machine learning has become increasingly important due to the recent rapid growth of data. This paper focuses on a class of regularized empirical risk minimization problems, and develops two methods to provide differential privacy to distributed learning algorithms over a network. We first decentralize the learning algorithm using the alternating direction method of multipliers (ADMM), and propose dual variable perturbation and primal variable perturbation to provide dynamic differential privacy. The two mechanisms lead to algorithms with privacy guarantees under mild conditions on the convexity and differentiability of the loss function and the regularizer. We study the performance of the algorithms, and show that dual variable perturbation outperforms its primal counterpart. To design an optimal privacy mechanism, we analyze the fundamental tradeoff between privacy and accuracy, and provide guidelines for choosing privacy parameters. Numerical experiments using a customer information database corroborate the results on the privacy-utility tradeoff and the mechanism design.
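The overall scheme can be illustrated with a toy sketch: consensus ADMM for regularized logistic regression across nodes, where each node perturbs its local primal iterate with Laplace noise before sharing it. This is a minimal illustration under assumed parameters, not the paper's calibrated mechanism — the data, step size, and noise scale `1/(eps*(t+1))` are hypothetical and do not reflect the sensitivity analysis the paper uses to set the noise distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy distributed setup: each of N nodes holds a private slice of a
# synthetic binary-classification dataset (illustrative data only).
N, d, n_per_node = 4, 3, 50
X = [rng.normal(size=(n_per_node, d)) for _ in range(N)]
w_true = np.array([1.0, -2.0, 0.5])
y = [np.sign(Xi @ w_true + 0.1 * rng.normal(size=n_per_node)) for Xi in X]

def local_grad(w, Xi, yi, lam):
    """Gradient of L2-regularized logistic loss on one node's data."""
    z = yi * (Xi @ w)
    g = -(Xi * (yi / (1 + np.exp(z)))[:, None]).mean(axis=0)
    return g + lam * w

rho, lam, eps = 1.0, 0.1, 1.0     # ADMM penalty, regularizer, privacy budget
w = [np.zeros(d) for _ in range(N)]   # local primal variables
u = [np.zeros(d) for _ in range(N)]   # scaled dual variables
z = np.zeros(d)                       # consensus variable

for t in range(200):
    for i in range(N):
        # Inexact local primal update: one gradient step on the augmented
        # Lagrangian (a stand-in for an exact local minimization).
        g = local_grad(w[i], X[i], y[i], lam) + rho * (w[i] - z + u[i])
        w[i] = w[i] - 0.1 * g
        # Primal variable perturbation: Laplace noise on the iterate each
        # node shares; the decaying scale here is an assumption, not a
        # derived sensitivity bound.
        w[i] += rng.laplace(scale=1.0 / (eps * (t + 1)), size=d)
    z = np.mean([w[i] + u[i] for i in range(N)], axis=0)
    for i in range(N):
        u[i] = u[i] + w[i] - z

# Classification accuracy of the noisy consensus model on the pooled data.
acc = np.mean([np.mean(np.sign(X[i] @ z) == y[i]) for i in range(N)])
```

Dual variable perturbation would instead inject the noise when updating `u[i]`; the paper's comparison shows that variant degrades accuracy less for the same privacy budget.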
