
Multi-component transfer metric learning for handling unrelated source domain samples



Abstract

Transfer learning (TL) is a machine learning paradigm designed for the problem where the training and test data are from different domains. Existing TL approaches mostly assume that training data from the source domain are collected from multiple views or devices. However, in practical applications, a sample in the target domain often corresponds to only a specific view or device. Without the ability to mitigate the influence of the many unrelated samples, the performance of existing TL approaches may deteriorate on such learning tasks. This problem is exacerbated if the intrinsic relationships among the source domain samples are unclear. Currently, there is no mechanism for determining the intrinsic characteristics of samples in order to treat them differently during TL. Source domain samples that are not related to the test data not only incur computational overhead, but may also result in negative transfer. We propose the multi-component transfer metric learning (MCTML) method to address this challenging research problem. Unlike previous metric-based transfer learning approaches, which use a single metric to transform all the samples, MCTML automatically extracts distinct components from the source domain and learns one metric for each component. For each component, MCTML learns the importance of that component in terms of its predictive power based on the learned Mahalanobis distance metric. The optimized combination of components is then used to predict the test data collaboratively. Extensive experiments on public datasets demonstrate its effectiveness in knowledge transfer under this challenging condition. (C) 2020 Elsevier B.V. All rights reserved.
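The abstract describes the method only at a high level. As a rough illustration of the core idea (one Mahalanobis metric per source component, combined by learned importance weights), below is a minimal, hypothetical Python sketch. The component partition, the metric matrices, and the weights are stand-ins for quantities MCTML would learn from data; this is not the authors' implementation.

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    # Squared Mahalanobis distance between x and y under metric matrix M (assumed PSD).
    d = x - y
    return float(d @ M @ d)

def weighted_component_predict(x_test, components, weights):
    # Hypothetical illustration: each component is (X_src, y_src, M) with its own
    # metric M; 'weights' are per-component importance values.
    # The prediction is a weighted nearest-neighbour vote across components.
    votes = {}
    for (X_src, y_src, M), w in zip(components, weights):
        dists = [mahalanobis_dist(x_test, x, M) for x in X_src]
        label = int(y_src[int(np.argmin(dists))])
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Toy usage with two synthetic components standing in for extracted source components.
rng = np.random.default_rng(0)
X1, y1 = rng.normal(0.0, 1.0, (20, 3)), rng.integers(0, 2, 20)
X2, y2 = rng.normal(2.0, 1.0, (20, 3)), rng.integers(0, 2, 20)
M1, M2 = np.eye(3), np.diag([2.0, 1.0, 0.5])   # stand-ins for learned Mahalanobis metrics
print(weighted_component_predict(np.zeros(3), [(X1, y1, M1), (X2, y2, M2)], [0.7, 0.3]))
```

In this sketch, down-weighting a component's vote plays the role of suppressing unrelated source samples; in MCTML the components and their weights are optimized jointly rather than fixed by hand.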

Bibliographic details

  • Source
    Knowledge-Based Systems | 2020, Issue 5 | pp. 106132.1-106132.11 | 11 pages
  • Author affiliations

    Foshan Univ Sch Elect & Informat Engn Foshan 528000 Peoples R China;

    Nanyang Technol Univ Joint NTU UBC Res Ctr Excellence Act Living Elder Singapore 639798 Singapore|South China Univ Technol Sch Software Engn Guangzhou 510006 Peoples R China;

    Nanyang Technol Univ Sch Comp Sci & Engn Singapore 639798 Singapore;

    Univ Hong Kong Dept Math Hong Kong Peoples R China;

    WeBank Dept AI Shenzheng 518000 Peoples R China;

  • Indexing information
  • Original format: PDF
  • Language: eng
  • CLC classification
  • Keywords

    Transfer learning; Metric learning; Component; Mahalanobis distance; Weight matrix;


