International Conference on Neural Information Processing

Accelerated Training Algorithms of General Fuzzy Min-Max Neural Network Using GPU for Very High Dimensional Data



Abstract

One of the issues in training a general fuzzy min-max neural network (GFMM) on very high-dimensional data is the long training time, even when the number of samples is relatively low. This is a common problem shared by many prototype-based methods that require frequently repeated distance or similarity calculations. This paper proposes a method of accelerating the learning algorithms of the GFMM by first reformulating and representing them in a format that allows for their parallel execution, and subsequently leveraging the computational power of the graphics processing unit (GPU). The original implementation of the GFMM is recast in terms of matrix computations so that it can be executed on the GPU for very high-dimensional datasets. Empirical results on two very high-dimensional datasets indicate that the training and testing processes performed on an Nvidia Quadro P5000 GPU were 10 to 35 times faster than those running serially on a Xeon CPU, while retaining the same classification accuracy.
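
The abstract describes recasting the GFMM learning steps as matrix computations that run on the GPU. The following is a minimal sketch of what such a reformulation can look like for the hyperbox membership computation, assuming the standard GFMM membership function (Gabrys and Bargiela) and using CuPy as a GPU array library; the function name, the hyperbox arrays V and W, and the sensitivity parameter gamma are illustrative and are not taken from the paper's implementation.

# A minimal sketch of a matrix/GPU reformulation of the GFMM membership
# computation, assuming the standard GFMM membership function.
# CuPy is used as a stand-in GPU array library; names are illustrative.
import cupy as cp

def gfmm_membership_batch(x_lower, x_upper, V, W, gamma=1.0):
    """Membership of one (possibly fuzzy) input against all hyperboxes at once.

    x_lower, x_upper : (n,)   lower/upper bounds of the input pattern
    V, W             : (k, n) min/max points of k hyperboxes
    Returns          : (k,)   membership degree of the input in each hyperbox
    """
    def ramp(r):
        # Ramp threshold function f(r, gamma): 0 below 0, linear, saturates at 1
        return cp.clip(r * gamma, 0.0, 1.0)

    # Broadcasting compares the input with all k hyperboxes and n dimensions
    # in a few GPU kernel launches instead of a per-hyperbox CPU loop.
    left = 1.0 - ramp(x_upper[None, :] - W)   # violation of each max point
    right = 1.0 - ramp(V - x_lower[None, :])  # violation of each min point
    return cp.minimum(left, right).min(axis=1)

# Example: 4 hyperboxes in a 100,000-dimensional space, scored in one call.
n, k = 100_000, 4
V = cp.random.rand(k, n) * 0.5
W = V + 0.3
x = cp.random.rand(n)
print(gfmm_membership_batch(x, x, V, W))

Replacing the per-hyperbox loop of a serial implementation with broadcast min/max matrix operations of this kind is the sort of parallel reformulation the abstract refers to; the paper's actual algorithms and data layout may differ.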
