Neurocomputing

Empirical kernel map-based multilayer extreme learning machines for representation learning

Abstract

Recently, the multilayer extreme learning machine (ML-ELM) and the hierarchical extreme learning machine (H-ELM) were developed for representation learning; compared to the traditional stacked autoencoder (SAE), their training time can be reduced from hours to seconds. However, ML-ELM and H-ELM suffer from three practical issues: (1) the random projection in every layer leads to unstable and suboptimal performance; (2) manually tuning the number of hidden nodes in every layer is time-consuming; and (3) with a large hidden layer, training becomes relatively slow and requires substantial storage. More recently, issues (1) and (2) were resolved by a kernel method, the multilayer kernel ELM (ML-KELM), which encodes the hidden layer as a kernel matrix computed by applying a kernel function to the input data; however, the storage and computation costs of the kernel matrix pose a big challenge in large-scale applications. In this paper, we empirically show that these issues can be alleviated by encoding the hidden layer as an approximate empirical kernel map (EKM) computed from a low-rank approximation of the kernel matrix. The proposed method, called ML-EKM-ELM, makes three contributions: (1) stable and better performance is achieved without any random projection mechanism; (2) exhaustive manual tuning of the number of hidden nodes in every layer is eliminated; and (3) EKM is scalable and produces a much smaller hidden layer, enabling fast training and low memory storage, thereby making it suitable for large-scale problems. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed ML-EKM-ELM. As an illustrative example, on the NORB dataset, ML-EKM-ELM is up to 16 times faster than ML-KELM in training and up to 37 times faster in testing, with a small accuracy loss of 0.35%, while memory storage can be reduced to as little as 1/9. (C) 2018 Elsevier B.V. All rights reserved.
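
To make the core idea concrete, the following is a minimal sketch, not the authors' implementation, of building an approximate empirical kernel map from a low-rank approximation of the kernel matrix and using it as an ELM-style hidden layer with a ridge-regression readout. The choice of Nyström landmark sampling as the low-rank approximation, the RBF kernel, and all function names and parameters are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    """RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0))

def empirical_kernel_map(X, landmarks, gamma=0.1, eps=1e-8):
    """Approximate EKM via Nystrom: Phi = K(X, Z) V diag(lam)^(-1/2),
    so that Phi @ Phi.T approximates the full kernel matrix K(X, X)."""
    Kzz = rbf_kernel(landmarks, landmarks, gamma)          # m x m landmark kernel
    lam, V = np.linalg.eigh(Kzz)                           # eigendecomposition
    keep = lam > eps                                       # drop near-zero eigenvalues
    W = V[:, keep] / np.sqrt(lam[keep])                    # m x r feature mapping
    return rbf_kernel(X, landmarks, gamma) @ W             # n x r EKM features

def elm_readout(Phi, T, C=1.0):
    """ELM-style output weights: beta = (Phi^T Phi + I/C)^(-1) Phi^T T."""
    r = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + np.eye(r) / C, Phi.T @ T)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                 # toy inputs
T = rng.normal(size=(500, 10))                 # toy targets
Z = X[rng.choice(len(X), 50, replace=False)]   # m = 50 landmarks << n = 500
Phi = empirical_kernel_map(X, Z)               # hidden layer of at most 50 dims
beta = elm_readout(Phi, T)
print(Phi.shape, beta.shape)
```

Because the EKM features have only r <= m columns for m landmarks with m much smaller than n, the hidden layer and the ridge solve scale with m rather than with the n x n kernel matrix, which is consistent with the training-time and memory savings described above; stacking such layers autoencoder-style would give a multilayer variant.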
