Guest Speakers

Abstract

Most machine learning methods are based on optimization models. Constructing a machine learning model by minimizing a loss function is a basic example of the relation between machine learning and optimization. The growing volume and dimensionality of data call for sparse models that reduce computation cost. While sparse models are computationally effective in practice, they are obtained through zero-norm or one-norm approximations in non-convex optimization models. The challenge lies in the non-convex structure of the problem. Recently, learning algorithms based on such sparse models with non-convexity approximations have gained interest in the literature. In this talk, I will survey recent approximations and sparse learning models in the literature, specifically in feature selection, ensemble learning, and deep learning. Since ensemble learning aims to aggregate the most accurate and diverse learners to produce the best final decision, selecting the best learning models out of the ensemble becomes a sparse learning problem similar to feature selection. The connections between deep learning and sparse learning will be explained in the same fashion throughout this talk. As an example of such methodologies, some ongoing biomedical machine learning projects at the Bahcesehir University Computer Vision Laboratory will be presented at the end of the talk.
