IEICE Transactions on Information and Systems

Fast Iterative Mining Using Sparsity-Inducing Loss Functions



Abstract

Apriori-based mining algorithms enumerate frequent patterns efficiently, but the resulting large number of patterns makes it difficult to directly apply subsequent learning tasks. Recently, efficient iterative methods have been proposed for mining discriminative patterns for classification and regression. These methods iteratively execute a discriminative pattern mining algorithm and update example weights to emphasize examples that received large errors in the previous iteration. In this paper, we study a family of loss functions that induces sparsity on example weights. Most of the resulting example weights become zero, so we can eliminate those examples from discriminative pattern mining, leading to a significant decrease in search space and time. In computational experiments we compare and evaluate various loss functions in terms of the amount of sparsity induced and the resulting speed-up.
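A minimal sketch (not from the paper) of the sparsity effect described above, assuming a boosting-style setup where each example's weight is the magnitude of the loss (sub)gradient at its margin. With squared loss nearly every weight is nonzero, whereas a hinge-type loss assigns exactly zero weight to examples whose margin exceeds 1, so those examples could be dropped from the next mining iteration. The margins here are simulated, not real experimental data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated margins m_i = y_i * f(x_i) for 1000 examples.
margins = rng.normal(loc=1.0, scale=1.0, size=1000)

# Squared loss L(m) = (1 - m)^2 -> weight |L'(m)| = 2|1 - m|, almost never zero.
w_squared = 2.0 * np.abs(1.0 - margins)

# Hinge loss L(m) = max(0, 1 - m) -> weight 1 if m < 1, else exactly 0 (sparse).
w_hinge = (margins < 1.0).astype(float)

sparsity_sq = float(np.mean(w_squared == 0.0))
sparsity_hinge = float(np.mean(w_hinge == 0.0))
print(f"zero-weight fraction, squared loss: {sparsity_sq:.2f}")
print(f"zero-weight fraction, hinge loss:   {sparsity_hinge:.2f}")
```

Under these simulated margins roughly half of the hinge-loss weights vanish, which is the mechanism the abstract exploits to shrink the pattern-mining search space.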
