Frontiers in Computational Neuroscience

Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation



Abstract

We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task.
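Below is a minimal NumPy sketch of the two-phase procedure the abstract describes, assuming a single hidden layer, a hard-sigmoid nonlinearity, and simplified discrete-time leaky-integrator dynamics (biases and the ρ′ gating factor from the paper's state equations are dropped for brevity). The layer sizes, hyperparameters, and names such as `train_step` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative; the paper uses MNIST with 1-3 hidden layers).
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0.0, 0.1, (n_in, n_hid))   # input <-> hidden (symmetric)
W2 = rng.normal(0.0, 0.1, (n_hid, n_out))  # hidden <-> output (symmetric)

def rho(s):
    # Hard-sigmoid nonlinearity, as in the paper's energy function.
    return np.clip(s, 0.0, 1.0)

def step(x, h, y_state, beta=0.0, target=None, eps=0.5):
    """One discrete-time step of s <- s + eps * (-dE/ds - beta * dC/ds),
    with the input x clamped. Simplified: no biases, no rho'(s) factor."""
    dh = -h + rho(x) @ W1 + rho(y_state) @ W2.T
    dy = -y_state + rho(h) @ W2
    if beta != 0.0:
        # Weakly clamped phase: nudge the output units toward the target,
        # i.e. the gradient of the cost C = 0.5 * ||target - y||^2.
        dy += beta * (target - y_state)
    return h + eps * dh, y_state + eps * dy

def relax(x, h, y_state, n_steps, beta=0.0, target=None):
    # Run the dynamics until (approximately) a fixed point.
    for _ in range(n_steps):
        h, y_state = step(x, h, y_state, beta, target)
    return h, y_state

def train_step(x, target, beta=0.5, lr=0.05):
    global W1, W2
    h, y_state = np.zeros(n_hid), np.zeros(n_out)
    # Phase 1 (free phase): relax to a fixed point; the output is the prediction.
    h0, y0 = relax(x, h, y_state, n_steps=50)
    # Phase 2 (weakly clamped): starting from the free fixed point, nudge
    # the output toward the target; the perturbation propagates backward.
    hb, yb = relax(x, h0, y0, n_steps=20, beta=beta, target=target)
    # Contrastive, purely local update: difference of pre/post-synaptic
    # correlations between the two fixed points, scaled by 1/beta.
    W1 += (lr / beta) * (np.outer(rho(x),  rho(hb)) - np.outer(rho(x),  rho(h0)))
    W2 += (lr / beta) * (np.outer(rho(hb), rho(yb)) - np.outer(rho(h0), rho(y0)))
    return y0  # free-phase prediction
```

Note how each weight update uses only the pre- and post-synaptic activities at the two fixed points: no error derivatives are ever computed explicitly, which is the sense in which the second phase "back-propagates" errors implicitly through the same neural dynamics used for inference.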
