IEEE Winter Conference on Applications of Computer Vision

Deep Learning on Small Datasets without Pre-Training using Cosine Loss

Abstract

Two things seem to be indisputable in the contemporary deep learning discourse: 1. The categorical cross-entropy loss after softmax activation is the method of choice for classification. 2. Training a CNN classifier from scratch on small datasets does not work well. In contrast to this, we show that the cosine loss function provides substantially better performance than cross-entropy on datasets with only a handful of samples per class. For example, the accuracy achieved on the CUB-200-2011 dataset without pre-training is 30% higher than with the cross-entropy loss. Further experiments on other popular datasets confirm our findings. Moreover, we demonstrate that integrating prior knowledge in the form of class hierarchies is straightforward with the cosine loss and improves classification performance further.
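
The loss described in the abstract is simple to state: measure the cosine similarity between the L2-normalized network output and a unit-length class embedding, and minimize one minus that similarity. Below is a minimal PyTorch sketch under that reading, using one-hot class embeddings; the function name and setup are illustrative and not taken from the paper's implementation.

    import torch
    import torch.nn.functional as F

    def cosine_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Project the raw network outputs onto the unit hypersphere.
        preds = F.normalize(logits, p=2, dim=1)
        # One-hot target vectors already have unit length, so the dot
        # product below equals the cosine similarity between prediction
        # and target embedding.
        onehot = F.one_hot(targets, num_classes=logits.size(1)).float()
        return (1.0 - (preds * onehot).sum(dim=1)).mean()

    # Usage (hypothetical training step): this replaces the usual
    # cross-entropy criterion and takes raw outputs, i.e. no softmax.
    # loss = cosine_loss(model(images), labels)

Under this formulation, the prior knowledge mentioned in the abstract would plausibly enter by swapping the one-hot vectors for class embeddings derived from a class hierarchy, keeping the loss itself unchanged.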