Source: JMLR: Workshop and Conference Proceedings

Deep Models of Interactions Across Sets

Abstract

We use deep learning to model interactions across two or more sets of objects, such as user–movie ratings or protein–drug bindings. The canonical representation of such interactions is a matrix (or tensor) with an exchangeability property: the encoding's meaning is not changed by permuting rows or columns. We argue that models should hence be Permutation Equivariant (PE): constrained to make the same predictions across such permutations. We present a parameter-sharing scheme and prove that it is maximally expressive under the PE constraint. This scheme yields three benefits. First, we demonstrate performance competitive with the state of the art on multiple matrix completion benchmarks. Second, our models require a number of parameters independent of the numbers of objects and thus scale well to large datasets. Third, models can be queried about new objects that were not available at training time, but for which interactions have since been observed. We observed surprisingly good generalization performance on this matrix extrapolation task, both within domains (e.g., new users and new movies drawn from the same distribution used for training) and even across domains (e.g., predicting music ratings after training on movie ratings).
