
Federated learning optimization using distillation



Abstract

Federated learning is a special type of distributed machine learning that enables a large number of edge computing devices to train models collaboratively without sharing any private data. In this decentralized training approach, the data always stays local on each device, which provides a guarantee of privacy and security. However, federated learning faces two heterogeneity challenges: 1) heterogeneous models across devices; 2) real-world data that is not independently and identically distributed. Both cause traditional federated learning algorithms to perform poorly. To address these problems, a distributed training method based on knowledge distillation is proposed. A personalized model is introduced on each device and used to improve the performance of the global model on that device, thereby strengthening the global model overall. The improvement in local model performance comes from knowledge distillation, which guides the global model by transferring "dark knowledge" between heterogeneous networks. Experimental results show that this method significantly improves the accuracy of classification tasks while meeting the needs of heterogeneous users.
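As a minimal sketch of the idea described above, the snippet below shows how a device-side update could combine a cross-entropy loss on private data with a distillation (KL-divergence) term that transfers the personalized model's soft predictions to the shared global model. It assumes a PyTorch setup; the function name, temperature, and mixing weight `alpha` are illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch only: a device-side training step in which a local
# personalized model (teacher) guides the shared global model (student)
# via knowledge distillation. Hyperparameters are assumptions for
# illustration, not values from the paper.
import torch
import torch.nn.functional as F


def device_distillation_step(global_model, personalized_model, optimizer,
                             x, y, temperature=3.0, alpha=0.5):
    """One local update: cross-entropy on private data plus a KL term
    that transfers the personalized model's "dark knowledge"."""
    global_model.train()
    personalized_model.eval()

    logits_student = global_model(x)
    with torch.no_grad():  # the personalized teacher is not updated here
        logits_teacher = personalized_model(x)

    # Hard-label loss on the device's private data.
    ce_loss = F.cross_entropy(logits_student, y)

    # Soft-label (distillation) loss between temperature-softened outputs.
    kd_loss = F.kl_div(
        F.log_softmax(logits_student / temperature, dim=1),
        F.softmax(logits_teacher / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    loss = alpha * ce_loss + (1 - alpha) * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the updated global-model parameters would be sent back to the server, the private data and the heterogeneous personalized models never leave the device in this sketch.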
