
Proximal maximum margin matrix factorization for collaborative filtering


Abstract

Maximum Margin Matrix Factorization (MMMF) has been a successful learning method in collaborative filtering research. For a partially observed ordinal rating matrix, the goal is to determine low-norm latent factor matrices U (of users) and V (of items) that simultaneously approximate the observed entries under some loss measure and predict the unobserved entries. When the rating matrix contains only two levels (±1), the rows of V can be viewed as points in k-dimensional space and the rows of U as decision hyperplanes in this space separating the +1 entries from the −1 entries. The idea of optimizing a loss function to determine the separating hyperplane is prevalent in support vector machine (SVM) research, and when the hinge or smooth hinge loss is used, the hyperplanes act as maximum-margin separators. In MMMF, a rating matrix with multiple discrete levels is handled by specially extending the hinge loss function to suit multiple levels. MMMF is an efficient technique for collaborative filtering, but it has several shortcomings. A prominent one is overfitting: if the learning iterations are prolonged to decrease the training error, the generalization error grows. In this paper, we propose a new, alternative maximum margin factorization scheme for discrete-valued rating matrices that overcomes this overfitting problem. Our work draws motivation from recent work on proximal support vector machines (PSVMs), in which two parallel hyperplanes are used for binary classification and points are assigned to the class corresponding to the closer of the two parallel hyperplanes. In other words, proximity to a decision hyperplane is used as the classifying criterion. We show that a similar concept can be used to factorize the rating matrix if the loss function is suitably defined. The present matrix factorization scheme has advantages over MMMF analogous to the advantages of PSVM over standard SVM. We validate our hypothesis by carrying out experiments on real and synthetic datasets. (C) 2016 Elsevier B.V. All rights reserved.
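For orientation, one standard way to write the multi-level MMMF objective sketched in the abstract is the fast-MMMF formulation of Rennie and Srebro (2005), which the paper builds on; the exact objective used in this paper may differ. For ratings Y_ij in {1, ..., R} observed on a set Ω, with per-user thresholds θ_{ir}:

\min_{U,V,\theta}\; \frac{\lambda}{2}\left(\|U\|_F^2+\|V\|_F^2\right)
+\sum_{r=1}^{R-1}\sum_{(i,j)\in\Omega} h\!\left(T^{r}_{ij}\left(\theta_{ir}-U_i V_j^{\top}\right)\right),
\qquad
T^{r}_{ij}=\begin{cases}+1, & r\ge Y_{ij}\\ -1, & r<Y_{ij}\end{cases}

where h is the smooth hinge loss

h(z)=\begin{cases}\tfrac{1}{2}-z, & z\le 0\\[2pt] \tfrac{1}{2}(1-z)^2, & 0<z<1\\[2pt] 0, & z\ge 1.\end{cases}

The binary (±1) case reduces to a single threshold at zero, which recovers the hyperplane picture described in the abstract.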
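As a concrete illustration of the binary (±1) case, the sketch below runs plain gradient descent on the smooth-hinge MMMF objective. This is a minimal sketch, not the authors' algorithm: the function names and hyperparameters (mmmf_binary, lam, lr) are ours, and replacing the smooth hinge with a squared, PSVM-style proximal loss would give the flavor of the proximal variant the paper proposes, whose exact loss is defined only in the full text.

import numpy as np

def smooth_hinge_grad(z):
    # Derivative of Rennie & Srebro's smooth hinge:
    # h(z) = 1/2 - z (z <= 0), (1-z)^2 / 2 (0 < z < 1), 0 (z >= 1)
    return np.where(z >= 1.0, 0.0, np.where(z <= 0.0, -1.0, z - 1.0))

def mmmf_binary(Y, mask, k=5, lam=0.1, lr=0.05, iters=300, seed=0):
    """Gradient descent on
       J = sum_{(i,j) in Omega} h(Y_ij * (U_i . V_j)) + (lam/2)(||U||_F^2 + ||V||_F^2)."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    for _ in range(iters):
        Z = U @ V.T                                 # current predictions
        G = mask * Y * smooth_hinge_grad(Y * Z)     # dJ/dZ, zero off the observed set
        U, V = U - lr * (G @ V + lam * U), V - lr * (G.T @ U + lam * V)
    return U, V

# Toy usage: a rank-1 sign matrix with half the entries observed.
rng = np.random.default_rng(1)
Y = np.sign(rng.standard_normal((30, 1)) @ rng.standard_normal((1, 20)))
mask = (rng.random(Y.shape) < 0.5).astype(float)
U, V = mmmf_binary(Y, mask)
print("held-out sign accuracy:",
      (np.sign(U @ V.T)[mask == 0] == Y[mask == 0]).mean())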

Bibliographic Information

  • Source
    Pattern Recognition Letters | 2017, Issue 15 | pp. 62-67 | 6 pages
  • Author Affiliations

    Univ Hyderabad, Sch Comp & Informat Sci, Artificial Intelligence Lab, Hyderabad 500046, Andhra Pradesh, India;

    Univ Hyderabad, Sch Comp & Informat Sci, Artificial Intelligence Lab, Hyderabad 500046, Andhra Pradesh, India|Cent Univ Rajasthan, Ajmer, Rajasthan, India;

    Univ Hyderabad, Sch Comp & Informat Sci, Artificial Intelligence Lab, Hyderabad 500046, Andhra Pradesh, India;

    Univ Hyderabad, Sch Comp & Informat Sci, Artificial Intelligence Lab, Hyderabad 500046, Andhra Pradesh, India;

    Univ Hyderabad, Sch Comp & Informat Sci, Artificial Intelligence Lab, Hyderabad 500046, Andhra Pradesh, India;

  • Indexed In: Science Citation Index (SCI); Engineering Index (EI)
  • Original Format: PDF
  • Language: English
  • CLC Classification:
  • Keywords

    Collaborative filtering; Matrix completion; Matrix factorization;


