
Kernel optimization and distributed learning algorithms for support vector machines.

Abstract

The support vector machine (SVM) has been one of the most successful tools for pattern recognition and function estimation over the past ten years. The underlying training problem can be formulated as a large quadratic programming problem that can be solved efficiently via existing decomposition algorithms. This assumes, however, that the kernel function and all of its parameters are given and that the training vectors can be accessed locally. These assumptions do not hold for classification problems with heterogeneous features or distributed training data. The purpose of this thesis is to address these problems by developing methods for kernel optimization and distributed learning.

Recent advances in kernel machine algorithms based on convex optimization have made it easier to incorporate information from training samples with little user interaction. As a first contribution, we generalize kernel estimation techniques for binary classification to multi-class classification problems. The kernel optimization problem for the multi-class SVM is formulated as a semi-definite program (SDP), and a decomposition method is proposed to reduce the computational complexity of solving it. As a further generalization, we consider kernel optimization for support vector regression (SVR). The proposed kernel optimization methods are applied to the analysis of retinal ganglion cell neuron signals.

The distributed SVM training algorithm proposed in this thesis is based on a simple idea: exchanging support vectors over a strongly connected network. The properties of the algorithm under various configurations are analyzed. We also propose a randomized parallel SVM that uses randomized sampling and has a provably fast average convergence rate.

Finally, we discuss maximum likelihood estimation of Gaussian mixture models with known variances. The problem is formulated as a bi-concave maximization problem, and we work out the details of a generalized Benders decomposition for computing the global optimum of this bi-concave problem. Simple numerical results are presented to demonstrate the advantage over the widely used Expectation-Maximization (EM) algorithm.
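For reference, the large quadratic program mentioned above takes the following standard dual form (a textbook formulation with kernel K, labels y_i in {-1, +1}, and regularization parameter C; this is conventional notation, not notation taken from the thesis itself):

    \max_{\alpha}\ \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
    \text{subject to}\ \ 0 \le \alpha_i \le C, \qquad \sum_{i=1}^{n} \alpha_i y_i = 0

In the kernel optimization setting, the Gram matrix of K itself becomes a decision variable constrained to be positive semi-definite, which is what turns the training problem into a semi-definite program.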
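The support-vector exchange idea lends itself to a compact illustration. Below is a minimal two-node sketch using scikit-learn's SVC on toy data; the shard split, the fixed five rounds, and the helper local_train are illustrative assumptions, not the thesis's actual algorithm, network model, or stopping rule.

    # Minimal sketch: two nodes exchange support vectors each round.
    # Assumes scikit-learn; toy data stands in for a real distributed set.
    import numpy as np
    from sklearn.svm import SVC

    def local_train(X, y, C=1.0):
        """Train a local SVM and return it with its support vectors."""
        clf = SVC(kernel="rbf", C=C).fit(X, y)
        return clf, X[clf.support_], y[clf.support_]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    shards = [(X[:100], y[:100]), (X[100:], y[100:])]  # one shard per node

    received = [None, None]  # support vectors received from the neighbor
    for _ in range(5):
        outgoing = [None, None]
        for i, (Xi, yi) in enumerate(shards):
            # Retrain on the local shard augmented with received SVs.
            if received[i] is not None:
                Xi = np.vstack([Xi, received[i][0]])
                yi = np.concatenate([yi, received[i][1]])
            clf, sv_X, sv_y = local_train(Xi, yi)
            outgoing[1 - i] = (sv_X, sv_y)  # forward SVs to the other node
        received = outgoing

On a strongly connected network the same pattern generalizes: each node forwards its current support set along its outgoing edges and retrains on what it receives, and the process stops once the exchanged sets stabilize.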

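For completeness, the maximum likelihood problem treated in the final part can be written in standard notation (mixture weights \pi_k, means \mu_k, and known variances \sigma_k^2; the bi-concave reformulation actually used in the thesis is not reproduced here):

    \max_{\pi, \mu}\ \sum_{n=1}^{N} \log \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x_n \mid \mu_k, \sigma_k^2)
    \text{subject to}\ \ \pi_k \ge 0, \qquad \sum_{k=1}^{K} \pi_k = 1

EM only guarantees convergence to a stationary point of this objective, whereas the generalized Benders decomposition described above targets its global optimum.
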
Bibliographic Record

  • Author

    Lu, Yumao

  • Affiliation

    University of California, Los Angeles

  • Degree Grantor: University of California, Los Angeles
  • Subjects: Statistics; Operations Research; Computer Science
  • Degree: Ph.D.
  • Year: 2006
  • Pagination: 130 p.
  • Total Pages: 130
  • Format: PDF
  • Language: eng
  • CLC Classification: Statistics; Operations Research; Automation & Computer Technology
  • Keywords
