
Evolutionary and Constructive Approaches to Supervised Learning in Hybrid Neural Networks


Abstract

Single hidden layer neural networks with supervised learning have been successfully applied to approximate unknown mappings defined on compact functional spaces. The most advanced results also provide rates of convergence, stipulating how many hidden neurons, with a given activation function, are needed to achieve a specific order of approximation. However, regardless of the activation function employed, these connectionist models for function approximation suffer from a severe limitation: all hidden neurons use the same activation function. If the activation function of each hidden neuron is properly and automatically defined, better rates of convergence can be achieved. This is exactly the purpose of constructive learning based on projection pursuit techniques, where each hidden neuron may adopt a distinct activation function. Despite this greater flexibility, the resulting transfer function associated with the constructive learning process is still strongly restricted: additive composition is the only way to combine the possibly distinct activation functions. We therefore propose a hybrid composition of activation functions that admits multiplicative composition as a candidate, in addition to the additive one. Two alternative implementations toward an optimal design are proposed: one based on evolutionary computation and nonlinear optimization techniques, and another based on an extended version of projection pursuit learning. Although a higher computational cost is demanded to obtain the solution, the amount of computational resources required by the resulting neural network architecture is impressively lower than that of solutions produced by traditional supervised learning approaches.
