Asian Conference on Computer Vision

Adaptive Unsupervised Multi-view Feature Selection for Visual Concept Recognition



Abstract

To reveal and leverage the correlated and complementary information between different views, many multi-view learning algorithms have been proposed in recent years. However, unsupervised feature selection in multi-view learning remains a challenge due to the lack of data labels that could be used to select discriminative features. Moreover, most traditional feature selection methods are developed for single-view data and are not directly applicable to multi-view data. Therefore, we propose an unsupervised learning method called Adaptive Unsupervised Multi-view Feature Selection (AUMFS) in this paper. AUMFS jointly exploits three kinds of vital information contained in the original data, i.e., the data cluster structure, data similarity, and the correlations between different views, for feature selection. To achieve this goal, a robust sparse regression model with the l_(2,1)-norm penalty is introduced to predict data cluster labels, and at the same time, multiple view-dependent visual similarity graphs are constructed to flexibly model the visual similarity in each view. AUMFS then integrates data cluster label prediction and adaptive multi-view similarity graph learning into a unified framework. To solve the objective function of AUMFS, a simple yet efficient iterative method is proposed. We apply AUMFS to three visual concept recognition applications (i.e., social image concept recognition, object recognition, and video-based human action recognition) on four benchmark datasets. Experimental results show the proposed method significantly outperforms several state-of-the-art feature selection methods. More importantly, our method is not very sensitive to the parameters, and the optimization method converges very fast.
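The abstract's central mechanism is the l_(2,1)-norm penalty on the regression matrix, which pushes entire rows (i.e., entire features) toward zero so that feature importance can be read off the row norms. A minimal sketch of this scoring idea, assuming a learned projection matrix `W` (features x cluster labels) and hypothetical helper names not taken from the paper:

```python
import numpy as np

def l21_norm(W):
    # l_{2,1} norm: sum over rows of the row-wise l2 norms.
    # Penalizing this encourages whole rows of W (whole features)
    # to shrink to zero, yielding row-sparse solutions.
    return np.sum(np.linalg.norm(W, axis=1))

def select_features(W, k):
    # Score each feature by the l2 norm of its row in W;
    # keep the indices of the k highest-scoring features.
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k]

# Toy example: feature 1 (all-zero row) contributes nothing,
# so the two informative features are selected.
W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
print(l21_norm(W))            # 5 + 0 + 1 = 6.0
print(select_features(W, 2))  # [0 2]
```

This sketch covers only the feature-scoring step; AUMFS itself learns `W` jointly with the multi-view similarity graphs via the iterative method described above.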
