
Scalable and efficient distributed auto-tuning of machine learning and deep learning models


Abstract

Herein are techniques for automatic tuning of hyperparameters of machine learning algorithms. System throughput is maximized by horizontally scaling and asynchronously dispatching the configuration, training, and testing of an algorithm. In an embodiment, a computer stores the best cost achieved by executing a target model based on the best values of the target algorithm's hyperparameters. The best values and their cost are updated by epochs that execute asynchronously. Each epoch has asynchronous costing tasks that explore a distinct hyperparameter. Each costing task has a sample of exploratory values that differs from the best values along the distinct hyperparameter. The asynchronous costing tasks of a same epoch have different values for the distinct hyperparameter, which accomplishes exploration of that hyperparameter. In an embodiment, an excessive update of best values or best cost creates a major epoch for exploration in a subspace that is more or less unrelated to other epochs, thereby avoiding local optima.
