International Conference on Intelligent Computer Communication and Processing

Improving the Accuracy of Deep Neural Networks Through Developing New Activation Functions

Abstract

Without activation functions, a neural network could learn only very basic tasks, so the activation function is a key element of the network's architecture. It allows the network to learn more complicated tasks and also affects the performance with which the outcome is obtained. Activation functions are therefore the subject of continuous and widespread research aimed at identifying the function best suited to a specific task. In this paper, we propose four activation functions that bring improvements on different datasets in computer vision tasks. These functions are combinations of popular activation functions such as the sigmoid, the bipolar sigmoid, the Rectified Linear Unit (ReLU), and the hyperbolic tangent (tanh). By allowing the activation functions to be learnable, we obtain more robust models. To validate these functions, we tested them on several datasets and on several architectures of different depths, showing that their properties are significant and useful. We also compared them with other powerful activation functions to see how the proposed functions affect accuracy.
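The abstract does not give the exact form of the four proposed functions, but the general idea of a learnable combination of standard activations can be sketched as follows. In this PyTorch-style sketch, the class name LearnableMixActivation and the softmax-normalized weighting are illustrative assumptions rather than the paper's actual definitions; it simply mixes the sigmoid, bipolar sigmoid, ReLU, and tanh with trainable weights that are updated by backpropagation along with the rest of the network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableMixActivation(nn.Module):
    """Hypothetical sketch of a learnable mix of sigmoid, bipolar sigmoid,
    ReLU, and tanh; the paper's actual parameterization is not given in
    the abstract."""

    def __init__(self):
        super().__init__()
        # One trainable weight per base activation, learned jointly
        # with the other network parameters.
        self.weights = nn.Parameter(torch.ones(4))

    def forward(self, x):
        w = torch.softmax(self.weights, dim=0)  # keep the mix normalized
        bipolar_sigmoid = 2.0 * torch.sigmoid(x) - 1.0
        return (w[0] * torch.sigmoid(x)
                + w[1] * bipolar_sigmoid
                + w[2] * F.relu(x)
                + w[3] * torch.tanh(x))


# Usage example: drop the learnable activation into a small classifier.
model = nn.Sequential(
    nn.Linear(784, 128),
    LearnableMixActivation(),
    nn.Linear(128, 10),
)

In this sketch the softmax keeps the four weights positive and summing to one, so the learned activation interpolates between the behaviours of its components; the functions proposed in the paper may combine or parameterize the base activations differently.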
