Frontiers in Computational Neuroscience

Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation



Abstract

We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task.
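The two-phase procedure described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it uses a logistic activation rather than the paper's hard sigmoid, a single hidden layer, and illustrative sizes and hyper-parameters (`beta`, `eps`, learning rate) chosen for the sketch. The free phase relaxes the state to an energy minimum; the weakly clamped phase adds the nudging force `beta * (target - y)`; the weight update is the contrastive difference of local pre/post products divided by `beta`.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(u):
    # Logistic activation (the paper uses a hard sigmoid; this avoids dead units)
    return 1.0 / (1.0 + np.exp(-u))

def drho(u):
    r = rho(u)
    return r * (1.0 - r)

# Tiny x -> h -> y network; all sizes are illustrative
nx, nh, ny = 4, 8, 2
W1 = rng.normal(scale=0.1, size=(nh, nx))
W2 = rng.normal(scale=0.1, size=(ny, nh))

def relax(x, target=None, beta=0.0, steps=100, eps=0.1):
    """Settle (h, y) by gradient descent on the total energy
    F = E + beta * 0.5 * ||y - target||^2, where
    E = 0.5(|h|^2 + |y|^2) - rho(h).W1.rho(x) - rho(y).W2.rho(h)."""
    h = np.zeros(nh)
    y = np.zeros(ny)
    for _ in range(steps):
        dh = -h + drho(h) * (W1 @ rho(x) + W2.T @ rho(y))  # -dE/dh
        dy = -y + drho(y) * (W2 @ rho(h))                  # -dE/dy
        if target is not None:
            dy += beta * (target - y)                      # nudging force
        h += eps * dh
        y += eps * dy
    return h, y

def eqprop_step(x, target, beta=0.5, lr=0.1):
    global W1, W2
    h0, y0 = relax(x)                        # first (free) phase
    h1, y1 = relax(x, target, beta)          # second (weakly clamped) phase
    # Contrastive, purely local update: difference of pre*post products / beta
    W1 += lr / beta * (np.outer(rho(h1), rho(x)) - np.outer(rho(h0), rho(x)))
    W2 += lr / beta * (np.outer(rho(y1), rho(h1)) - np.outer(rho(y0), rho(h0)))
    return 0.5 * np.sum((y0 - target) ** 2)  # free-phase prediction error

x = rng.uniform(size=nx)
t = np.array([1.0, 0.0])
losses = [eqprop_step(x, t) for _ in range(30)]
```

Note that both phases run the identical relaxation dynamics; as the abstract emphasizes, the only difference is the nudging term and whether the synaptic update is applied afterward.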
